add infermatic-text and whois plugins for AI text generation and WHOIS lookups

2026-04-26 02:20:23 -05:00
parent ed62397661
commit 6f86fe679f
10 changed files with 345 additions and 290 deletions
+62 -3
@@ -205,6 +205,41 @@ Data Returned:
Requires DNSDUMPSTER_KEY environment variable in .env file
```
### 🔍 WHOIS Lookup
**🌐 !whois <domain/ip>**
Perform comprehensive WHOIS lookups for domains and IP addresses.
**Features:**
- Domain validation and IP address recognition
- Registrar information and WHOIS server details
- Registration, update, and expiration dates
- Domain status and name server information
- Organization and geographic contact details
- Formatted HTML output with clear sections
- Graceful error handling for invalid queries
**Usage Examples:**
```bash
!whois example.com
!whois google.com
!whois 8.8.8.8
!whois 1.1.1.1
```
**Output includes:**
- Domain/IP query information
- Registrar and WHOIS server
- Important dates (creation, update, expiration)
- Domain status codes
- Name servers (up to 5, with count if more)
- Contact information (organization, country, state, city)
**Error Handling:**
- Validates domain/IP format before querying
- Provides clear error messages for failed lookups
- Handles rate limiting and WHOIS server unavailability
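The whois plugin's source is not shown in this diff, but with the `python-whois` package added to `requirements.txt`, the core lookup plausibly reduces to a sketch like the following (function name and formatting are illustrative):
```python
# Illustrative sketch only: the shipped plugin adds validation and HTML output.
import whois  # provided by the python-whois package added to requirements.txt

def whois_summary(query: str) -> str:
    """Return a short WHOIS summary for a domain or IP address."""
    w = whois.whois(query)  # network lookup; raises whois.parser.PywhoisError on no match
    servers = w.name_servers or []
    if isinstance(servers, str):  # python-whois may return a str or a list here
        servers = [servers]
    servers = sorted({s.lower() for s in servers})
    extra = f" (+{len(servers) - 5} more)" if len(servers) > 5 else ""
    return "\n".join([
        f"Query: {query}",
        f"Registrar: {w.registrar}",
        f"Created: {w.creation_date}",
        f"Expires: {w.expiration_date}",
        f"Name servers: {', '.join(servers[:5])}{extra}",
    ])
```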
## ExploitDB Plugin
A security plugin that searches Exploit-DB for vulnerabilities and exploits directly from Matrix.
@@ -368,9 +403,33 @@ Generates images using self-hosted Stable Diffusion with customizable parameters
- `--sampler` - Sampler name (default: DPM++ SDE)
**📄 !text [prompt] [options]**
-Generates text using Ollama's Mistral 7B Instruct model:
-- `--max_tokens` - Maximum tokens to generate (default: 512)
-- `--temperature` - Sampling temperature (default: 0.7)
Generates text using the Infermatic AI API with multiple model support:
**Main Commands:**
- `!text <prompt>` - Generate text using the default model from INFERMATIC_MODEL
- `!text --list-models` - List all available models from Infermatic AI
- `!text --use-model <model> <prompt>` - Use a specific model instead of the default
**Parameters:**
- `--temperature <value>` - Set generation temperature (0.0-1.0, default: 0.9)
- `--max-tokens <value>` - Set maximum tokens to generate (default: 2048)
**Configuration:**
- Requires `INFERMATIC_API` environment variable in `.env` file (your API key)
- Requires `INFERMATIC_MODEL` environment variable in `.env` file (default: Sao10K-L3.1-70B-Hanami-x1)
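A matching `.env` would contain both variables, for example (the key below is a placeholder):
```bash
INFERMATIC_API=your-api-key-here
INFERMATIC_MODEL=Sao10K-L3.1-70B-Hanami-x1
```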
**Examples:**
```bash
!text write a python function to calculate fibonacci numbers
!text --use-model llama-v3-8b-instruct explain quantum computing simply
!text --temperature 0.7 --max-tokens 500 write a haiku about artificial intelligence
!text --list-models
```
**Model Management:**
- Use `--list-models` to see available models with their capabilities
- Different models support various context lengths and specializations
- Costs and token limits vary by model
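Under the hood, `--list-models` queries Infermatic's OpenAI-compatible REST endpoint (see the plugin source below); the equivalent raw request is roughly:
```bash
# Same call the plugin makes; assumes the API key is exported in your shell
curl -s https://api.totalgpt.ai/v1/models \
  -H "Authorization: Bearer $INFERMATIC_API" \
  -H "Content-Type: application/json"
```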
### Media & Search Commands
+1 -1
@@ -14,4 +14,4 @@ config_file = "funguy.conf"
[plugins.disabled]
"!uFhErnfpYhhlauJsNK:matrix.org" = [ "youtube-preview", "ai", "proxy",]
"!vYcfWXpPvxeQvhlFdV:matrix.org" = []
"!NXdVjDXPxXowPkrJJY:matrix.org" = [ "karma",]
"!NXdVjDXPxXowPkrJJY:matrix.org" = [ "karma"]
+1 -1
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
"""
Funguy Bot Class
+44 -1
@@ -77,6 +77,23 @@ async def handle_command(room, message, bot, prefix, config):
<p>Fetches the current Bitcoin price in USD from the bitcointicker.co API. Shows real-time BTC/USD price with proper formatting. Includes error handling for API timeouts and data parsing issues.</p>
</details>
<details><summary>🌐 <strong>!whois &lt;domain/ip&gt;</strong></summary>
<p>Perform comprehensive WHOIS lookups for domains and IP addresses. Retrieves registrar information, registration dates, name servers, and contact details from WHOIS databases.</p>
<p><strong>Usage:</strong></p>
<ul>
<li><code>!whois &lt;domain&gt;</code> - Query domain registration information</li>
<li><code>!whois &lt;ip&gt;</code> - Query IP address allocation details</li>
</ul>
<p><strong>Examples:</strong></p>
<ul>
<li><code>!whois example.com</code></li>
<li><code>!whois google.com</code></li>
<li><code>!whois 8.8.8.8</code></li>
<li><code>!whois 1.1.1.1</code></li>
</ul>
<p><strong>Output includes:</strong> Domain/IP information, registrar, WHOIS server, creation/expiration dates, name servers, and contact details.</p>
</details>
<details><summary>🔍 <strong>!shodan [command] [query]</strong></summary>
<p>Shodan.io integration for security reconnaissance and threat intelligence.</p>
<p><strong>Commands:</strong></p>
@@ -290,7 +307,33 @@ Search Exploit-DB for security vulnerabilities and exploits. Returns detailed in
</details>
<details><summary>📄 <strong>!text [prompt]</strong></summary>
-<p>Generates text using Ollama's Mistral 7B Instruct model. Options: --max_tokens, --temperature. Uses queuing system for sequential processing.</p>
<p>Generates text using the Infermatic AI API. Supports multiple models, configurable parameters, and model listing. Uses queuing system for sequential processing.</p>
<p><strong>Usage:</strong></p>
<ul>
<li><code>!text &lt;prompt&gt;</code> - Generate text using the default model</li>
<li><code>!text --list-models</code> - List all available models from Infermatic AI</li>
<li><code>!text --use-model &lt;model_name&gt; &lt;prompt&gt;</code> - Use a specific model instead of the default</li>
<li><code>!text --temperature &lt;value&gt; &lt;prompt&gt;</code> - Set temperature (0.0-1.0, default: 0.9)</li>
<li><code>!text --max-tokens &lt;value&gt; &lt;prompt&gt;</code> - Set maximum tokens to generate (default: 2048)</li>
</ul>
<p><strong>Configuration:</strong></p>
<ul>
<li>Requires <code>INFERMATIC_API</code> environment variable set to your API key</li>
<li>Requires <code>INFERMATIC_MODEL</code> environment variable for default model (default: Sao10K-L3.1-70B-Hanami-x1)</li>
</ul>
<p><strong>Model Management:</strong></p>
<ul>
<li>Use <code>!text --list-models</code> to see all available models</li>
<li>Models support different capabilities and context lengths</li>
<li>Costs and token limits vary by model</li>
</ul>
<p><strong>Examples:</strong></p>
<ul>
<li><code>!text write a python function to calculate fibonacci</code></li>
<li><code>!text --list-models</code></li>
<li><code>!text --use-model llama-v3-8b-instruct explain quantum computing</code></li>
<li><code>!text --temperature 0.7 --max-tokens 500 write a haiku about AI</code></li>
</ul>
</details>
<details><summary>📰 <strong>!xkcd</strong></summary>
+233
@@ -0,0 +1,233 @@
"""
Plugin for generating text using the Infermatic AI API and sending it to a Matrix chat room.
"""
import os
import requests
import argparse
import json
import simplematrixbotlib as botlib
from asyncio import Queue
from dotenv import load_dotenv
# Load environment variables from .env file in the parent directory
plugin_dir = os.path.dirname(os.path.abspath(__file__))
parent_dir = os.path.dirname(plugin_dir)
dotenv_path = os.path.join(parent_dir, '.env')
load_dotenv(dotenv_path)
# Infermatic AI API configuration
INFERMATIC_API_KEY = os.getenv("INFERMATIC_API", "")
DEFAULT_MODEL = os.getenv("INFERMATIC_MODEL", "Sao10K-L3.1-70B-Hanami-x1")
INFERMATIC_API_BASE = "https://api.totalgpt.ai/v1"
# Queue to store pending commands
command_queue = Queue()
async def process_command(room, message, bot, prefix, config):
"""Queue and process !text commands sequentially."""
match = botlib.MessageMatch(room, message, bot, prefix)
if match.prefix() and match.command("text"):
if command_queue.empty():
await handle_command(room, message, bot, prefix, config)
else:
await command_queue.put((room, message, bot, prefix, config))
await bot.api.send_text_message(room.room_id, "Command queued. Please wait for the current request to finish.")
async def handle_command(room, message, bot, prefix, config):
"""Handle !text command: generate text using Infermatic AI API."""
match = botlib.MessageMatch(room, message, bot, prefix)
if not (match.prefix() and match.command("text")):
return
# Check if API key is configured
if not INFERMATIC_API_KEY:
await bot.api.send_text_message(
room.room_id,
"Infermatic API key not configured. Please set INFERMATIC_API environment variable."
)
return
# Parse command arguments
args = match.args()
if len(args) < 1:
await show_usage(room, bot)
return
# Check if it's a --list-models command
if args[0] == "--list-models":
await list_models(room, bot)
return
# Parse other arguments
try:
# Extract options manually since argparse doesn't handle mixed positional/optional well
temperature = 0.9
max_tokens = 2048
custom_model = None
prompt_parts = []
i = 0
while i < len(args):
if args[i] == "--temperature" and i + 1 < len(args):
temperature = float(args[i + 1])
i += 2
elif args[i] == "--max-tokens" and i + 1 < len(args):
max_tokens = int(args[i + 1])
i += 2
elif args[i] == "--use-model" and i + 1 < len(args):
custom_model = args[i + 1]
i += 2
else:
prompt_parts.append(args[i])
i += 1
prompt = ' '.join(prompt_parts).strip()
if not prompt:
await show_usage(room, bot)
return
model = custom_model or DEFAULT_MODEL
await generate_text(room, bot, prompt, model, temperature, max_tokens)
except ValueError as e:
await bot.api.send_text_message(room.room_id, f"Invalid parameter value: {e}")
except Exception as e:
await bot.api.send_text_message(room.room_id, f"Error processing command: {str(e)}")
async def show_usage(room, bot):
"""Display command usage information."""
usage = """
<strong>📄 Infermatic Text Generation Usage:</strong>
<strong>Basic:</strong>
• <code>!text &lt;prompt&gt;</code> - Generate text using default model
<strong>Commands:</strong>
• <code>!text --list-models</code> - List all available models
• <code>!text --use-model &lt;model&gt; &lt;prompt&gt;</code> - Use specific model
<strong>Parameters:</strong>
• <code>--temperature &lt;0.0-1.0&gt;</code> - Set temperature (default: 0.9)
• <code>--max-tokens &lt;number&gt;</code> - Set max tokens (default: 2048)
<strong>Examples:</strong>
• <code>!text write a python function to calculate fibonacci</code>
• <code>!text --list-models</code>
• <code>!text --use-model llama-v3-8b-instruct explain quantum computing</code>
• <code>!text --temperature 0.7 write a haiku about AI</code>
"""
await bot.api.send_markdown_message(room.room_id, usage)
async def list_models(room, bot):
"""List all available models from Infermatic AI."""
try:
await bot.api.send_text_message(room.room_id, "🔍 Fetching available models...")
url = f"{INFERMATIC_API_BASE}/models"
headers = {
"Authorization": f"Bearer {INFERMATIC_API_KEY}",
"Content-Type": "application/json"
}
response = requests.get(url, headers=headers, timeout=30)
response.raise_for_status()
data = response.json()
models = data.get('data', [])
if not models:
await bot.api.send_text_message(room.room_id, "No models found or error in response.")
return
# Format the model list
output = "<strong>🔧 Available Models:</strong><br><br>"
for model in models:
model_id = model.get('id', 'Unknown')
model_name = model.get('name', model_id)
context_length = model.get('context_length', 'Unknown')
pricing = model.get('pricing', {})
output += f"<strong>• {model_name}</strong><br>"
output += f" └─ ID: <code>{model_id}</code><br>"
output += f" └─ Context: {context_length}<br>"
if pricing:
prompt_price = pricing.get('prompt', '0')
completion_price = pricing.get('completion', '0')
output += f" └─ Price: ${prompt_price}/${completion_price} per 1M tokens<br>"
output += f" └─ <strong>Usage:</strong> <code>!text --use-model {model_id} &lt;prompt&gt;</code><br><br>"
# Wrap in collapsible details since list can be long
output = f"<details><summary><strong>🔧 Available Models (Click to expand)</strong></summary>{output}</details>"
await bot.api.send_markdown_message(room.room_id, output)
except requests.exceptions.RequestException as e:
await bot.api.send_text_message(room.room_id, f"❌ Error fetching models: {str(e)}")
except Exception as e:
await bot.api.send_text_message(room.room_id, f"❌ Unexpected error: {str(e)}")
async def generate_text(room, bot, prompt, model, temperature, max_tokens):
"""Generate text using the Infermatic AI API."""
try:
# Send initial processing message
await bot.api.send_text_message(room.room_id, f"📝 Generating text...")
url = f"{INFERMATIC_API_BASE}/chat/completions"
headers = {
"Authorization": f"Bearer {INFERMATIC_API_KEY}",
"Content-Type": "application/json"
}
payload = {
"model": model,
"messages": [
{"role": "user", "content": prompt}
],
"temperature": temperature,
"max_tokens": max_tokens
}
response = requests.post(url, headers=headers, json=payload, timeout=120)
response.raise_for_status()
data = response.json()
generated_text = data.get('choices', [{}])[0].get('message', {}).get('content', '').strip()
if not generated_text:
await bot.api.send_text_message(room.room_id, "No response generated.")
return
# Format the output with collapsible sections
output = f"<details><summary><strong>📝 Generated Text (Click to expand)</strong></summary>"
output += f"<strong>Model:</strong> <code>{model}</code><br><br>"
output += f"<strong>Prompt:</strong> {prompt}<br><br>"
output += f"<strong>Response:</strong><br><br>"
output += f"{generated_text}"
output += f"</details>"
await bot.api.send_markdown_message(room.room_id, output)
except requests.exceptions.Timeout:
await bot.api.send_text_message(room.room_id, "❌ Request timed out. The model is taking too long to respond.")
except requests.exceptions.HTTPError as e:
if e.response.status_code == 401:
await bot.api.send_text_message(room.room_id, "❌ Authentication failed. Please check your INFERMATIC_API key.")
elif e.response.status_code == 429:
await bot.api.send_text_message(room.room_id, "❌ Rate limit exceeded. Please try again later.")
else:
await bot.api.send_text_message(room.room_id, f"❌ API error: HTTP {e.response.status_code}")
except Exception as e:
await bot.api.send_text_message(room.room_id, f"❌ Error generating text: {str(e)}")
finally:
# Process next queued command
if not command_queue.empty():
next_command = await command_queue.get()
await handle_command(*next_command)
-95
@@ -1,95 +0,0 @@
"""
Plugin for generating text using Ollama's Mistral 7B Instruct model and sending it to a Matrix chat room.
"""
import requests
from asyncio import Queue
import simplematrixbotlib as botlib
import argparse
# Queue to store pending commands
command_queue = Queue()
API_URL = "http://localhost:11434/api/generate"
MODEL_NAME = "mistral:7b-instruct"
async def process_command(room, message, bot, prefix, config):
"""
Queue and process !text commands sequentially.
"""
match = botlib.MessageMatch(room, message, bot, prefix)
if match.prefix() and match.command("text"):
if command_queue.empty():
await handle_command(room, message, bot, prefix, config)
else:
await command_queue.put((room, message, bot, prefix, config))
async def handle_command(room, message, bot, prefix, config):
"""
Send the prompt to Ollama API and return the generated text.
"""
match = botlib.MessageMatch(room, message, bot, prefix)
if not (match.prefix() and match.command("text")):
return
# Parse optional arguments
parser = argparse.ArgumentParser(description='Generate text using Ollama API')
parser.add_argument('--max_tokens', type=int, default=512, help='Maximum tokens to generate')
parser.add_argument('--temperature', type=float, default=0.7, help='Temperature for generation')
parser.add_argument('prompt', nargs='+', help='Prompt for the model')
try:
args = parser.parse_args(message.body.split()[1:]) # Skip command itself
prompt = ' '.join(args.prompt).strip()
if not prompt:
await bot.api.send_text_message(room.room_id, "Usage: !text <your prompt here>")
return
payload = {
"model": MODEL_NAME,
"prompt": prompt,
"max_tokens": args.max_tokens,
"temperature": args.temperature,
"stream": False
}
response = requests.post(API_URL, json=payload, timeout=60)
response.raise_for_status()
r = response.json()
generated_text = r.get("response", "").strip()
if not generated_text:
generated_text = "(No response from model)"
await bot.api.send_text_message(room.room_id, generated_text)
except argparse.ArgumentError as e:
await bot.api.send_text_message(room.room_id, f"Argument error: {e}")
except requests.exceptions.RequestException as e:
await bot.api.send_text_message(room.room_id, f"Error connecting to Ollama API: {e}")
except Exception as e:
await bot.api.send_text_message(room.room_id, f"Unexpected error: {e}")
finally:
# Process next command from the queue, if any
if not command_queue.empty():
next_command = await command_queue.get()
await handle_command(*next_command)
def print_help():
"""
Generates help text for the !text command.
"""
return """
<p>Generate text using Ollama's Mistral 7B Instruct model</p>
<p>Usage:</p>
<ul>
<li>!text <prompt> - Basic prompt for the model</li>
<li>Optional arguments:</li>
<ul>
<li>--max_tokens MAX_TOKENS - Maximum tokens to generate (default 512)</li>
<li>--temperature TEMPERATURE - Sampling temperature (default 0.7)</li>
</ul>
</ul>
"""
+1 -1
@@ -118,7 +118,7 @@ async def handle_command(room, message, bot, prefix, config):
r = response.json()
# Use secure temporary file
-with tempfile.NamedTemporaryFile(suffix='.jpg', delete=False) as temp_file:
+with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as temp_file:
filename = temp_file.name
temp_file.write(base64.b64decode(r['images'][0]))
-187
@@ -1,187 +0,0 @@
"""
Plugin for providing a command to fetch YouTube video information from links.
"""
# Importing necessary libraries
import re
import logging
import asyncio
import aiohttp
import yt_dlp
import simplematrixbotlib as botlib
from youtube_title_parse import get_artist_title
LYRICIST_API_URL = "https://lyrist.vercel.app/api/{}/{}"
def seconds_to_minutes_seconds(seconds):
"""
Converts seconds to a string representation of minutes and seconds.
Args:
seconds (int): The number of seconds.
Returns:
str: A string representation of minutes and seconds in the format MM:SS.
"""
minutes = seconds // 60
seconds %= 60
return f"{minutes:02d}:{seconds:02d}"
async def fetch_lyrics(song, artist):
"""
Asynchronously fetches lyrics for a song from the Lyricist API.
Args:
song (str): The name of the song.
artist (str): The name of the artist.
Returns:
str: Lyrics of the song.
None if an error occurs during fetching.
"""
try:
async with aiohttp.ClientSession() as session:
url = LYRICIST_API_URL.format(artist, song)
logging.info(f"Fetching lyrics from: {url}")
async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as response:
if response.status == 200:
data = await response.json()
return data.get("lyrics")
else:
logging.warning(f"Lyrics API returned status {response.status}")
return None
except asyncio.TimeoutError:
logging.error("Timeout fetching lyrics")
return None
except Exception as e:
logging.error(f"Error fetching lyrics: {str(e)}")
return None
async def fetch_youtube_info(youtube_url):
"""
Asynchronously fetches information about a YouTube video using yt-dlp.
Args:
youtube_url (str): The URL of the YouTube video.
Returns:
str: A message containing information about the YouTube video.
None if an error occurs during fetching.
"""
try:
logging.info(f"Fetching YouTube info for: {youtube_url}")
# Configure yt-dlp options
ydl_opts = {
'quiet': True,
'no_warnings': True,
'extract_flat': False,
'skip_download': True,
}
# Run yt-dlp in thread pool to avoid blocking
loop = asyncio.get_event_loop()
def extract_info():
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
return ydl.extract_info(youtube_url, download=False)
info = await loop.run_in_executor(None, extract_info)
if not info:
logging.error("No info returned from yt-dlp")
return None
# Extract video information
title = info.get('title', 'Unknown Title')
description = info.get('description', 'No description available')
duration = info.get('duration', 0)
view_count = info.get('view_count', 0)
uploader = info.get('uploader', 'Unknown')
logging.info(f"Video title: {title}")
length = seconds_to_minutes_seconds(duration)
# Parse artist and song from title
artist, song = get_artist_title(title)
logging.info(f"Parsed artist: {artist}, song: {song}")
# Limit description length to avoid huge messages
if len(description) > 500:
description = description[:500] + "..."
description_with_breaks = description.replace('\n', '<br>')
# Build basic info message
info_message = f"""<strong>🎬🎝 Title:</strong> {title}<br><strong>Length:</strong> {length} | <strong>Views:</strong> {view_count:,} | <strong>Uploader:</strong> {uploader}<br><details><summary><strong>⤵︎Description⤵︎</strong></summary>{description_with_breaks}</details>"""
# Try to fetch lyrics if artist and song were parsed
if artist and song:
logging.info("Attempting to fetch lyrics...")
lyrics = await fetch_lyrics(song, artist)
if lyrics:
lyrics = lyrics.replace('\n', "<br>")
# Limit lyrics length
if len(lyrics) > 3000:
lyrics = lyrics[:3000] + "<br>...(truncated)"
info_message += f"<br><details><summary><strong>🎵 Lyrics:</strong></summary><br>{lyrics}</details>"
else:
logging.info("No lyrics found")
else:
logging.info("Could not parse artist/song from title, skipping lyrics")
return info_message
except Exception as e:
logging.error(f"Error fetching YouTube video information: {str(e)}", exc_info=True)
return None
async def handle_command(room, message, bot, prefix, config):
"""
Asynchronously handles the command to fetch YouTube video information.
Args:
room (Room): The Matrix room where the command was invoked.
message (RoomMessage): The message object containing the command.
bot (MatrixBot): The Matrix bot instance.
prefix (str): The command prefix.
config (dict): The bot's configuration.
Returns:
None
"""
match = botlib.MessageMatch(room, message, bot, prefix)
# Check if message contains a YouTube link
if match.is_not_from_this_bot() and re.search(r'(youtube\.com/watch\?v=|youtu\.be/)', message.body):
logging.info(f"YouTube link detected in message: {message.body}")
# Match both youtube.com and youtu.be formats
video_id_match = re.search(r'(?:youtube\.com/watch\?v=|youtu\.be/)([a-zA-Z0-9_-]{11})', message.body)
if video_id_match:
video_id = video_id_match.group(1)
youtube_url = f"https://www.youtube.com/watch?v={video_id}"
logging.info(f"Fetching information for YouTube video ID: {video_id}")
retry_count = 2 # Reduced retries since yt-dlp is more reliable
while retry_count > 0:
info_message = await fetch_youtube_info(youtube_url)
if info_message:
await bot.api.send_markdown_message(room.room_id, info_message)
logging.info("Sent YouTube video information to the room")
break
else:
logging.warning(f"Failed to fetch info, retrying... ({retry_count-1} attempts left)")
retry_count -= 1
if retry_count > 0:
await asyncio.sleep(2) # wait for 2 seconds before retrying
else:
logging.error("Failed to fetch YouTube video information after all retries")
await bot.api.send_text_message(room.room_id, "Failed to fetch YouTube video information. The video may be unavailable or age-restricted.")
else:
logging.warning("Could not extract video ID from YouTube URL")
+1 -1
@@ -28,7 +28,7 @@ async def handle_command(room, message, bot, PREFIX, config):
else:
search_terms = " ".join(args)
logging.info(f"Performing YouTube search for: {search_terms}")
-results = YoutubeSearch(search_terms, max_results=1).to_dict()
+results = YoutubeSearch(search_terms, max_results=3).to_dict()
if results:
output = generate_output(results)
await send_collapsible_message(room, bot, output)
+2
@@ -13,3 +13,5 @@ schedule
yt-dlp
pyopenssl
psutil
toml
python-whois
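After pulling this commit, the new dependencies can be installed the usual way:
```bash
pip install -r requirements.txt
```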