add infermatic-text and whois plugins for AI text generation and WHOIS lookups

This commit is contained in:
2026-04-26 02:20:23 -05:00
parent ed62397661
commit 6f86fe679f
10 changed files with 345 additions and 290 deletions
+44 -1
@@ -77,6 +77,23 @@ async def handle_command(room, message, bot, prefix, config):
<p>Fetches the current Bitcoin price in USD from the bitcointicker.co API and shows the real-time BTC/USD price with proper formatting. Includes error handling for API timeouts and data-parsing issues.</p>
</details>
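The fetch-and-format flow described above can be sketched as follows. This is a minimal illustration, not the plugin's actual code: the feed URL and the `"last"` JSON key are assumptions, and `fetch_btc_price` / `format_price` are hypothetical helper names.

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def fetch_btc_price(url: str, timeout: float = 10.0):
    """Fetch a JSON price feed; return None on timeout or malformed data."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
        # The "last" field name is an assumption about the feed's schema.
        return float(data["last"])
    except (URLError, KeyError, ValueError, TimeoutError):
        return None

def format_price(price):
    """Format a BTC/USD price with thousands separators, or an error note."""
    if price is None:
        return "BTC price unavailable"
    return f"BTC/USD: ${price:,.2f}"
```

Returning `None` from the fetcher and handling it in the formatter keeps the network failure modes (timeout, bad JSON, missing key) in one place, matching the error handling the help text describes.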
<details><summary>🌐 <strong>!whois &lt;domain/ip&gt;</strong></summary>
<p>Performs comprehensive WHOIS lookups for domains and IP addresses. Retrieves registrar information, registration dates, name servers, and contact details from WHOIS databases.</p>
<p><strong>Usage:</strong></p>
<ul>
<li><code>!whois &lt;domain&gt;</code> - Query domain registration information</li>
<li><code>!whois &lt;ip&gt;</code> - Query IP address allocation details</li>
</ul>
<p><strong>Examples:</strong></p>
<ul>
<li><code>!whois example.com</code></li>
<li><code>!whois google.com</code></li>
<li><code>!whois 8.8.8.8</code></li>
<li><code>!whois 1.1.1.1</code></li>
</ul>
<p><strong>Output includes:</strong> Domain/IP information, registrar, WHOIS server, creation/expiration dates, name servers, and contact details.</p>
</details>
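A WHOIS lookup like the one described above boils down to the raw protocol from RFC 3912: send the query over TCP port 43 and read lines of `key: value` pairs back. A minimal sketch, assuming a plain-text query against `whois.iana.org` (the plugin may well use a library or referral-following instead; `whois_query` and `parse_whois_fields` are hypothetical names):

```python
import socket

def whois_query(query: str, server: str = "whois.iana.org", port: int = 43) -> str:
    """Send a raw WHOIS query (RFC 3912) and return the server's response."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def parse_whois_fields(raw: str) -> dict:
    """Extract simple 'key: value' pairs, skipping '%' comment lines."""
    fields = {}
    for line in raw.splitlines():
        if ":" in line and not line.lstrip().startswith("%"):
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value and key not in fields:
                fields[key] = value
    return fields
```

Real responses often refer you to the registrar's WHOIS server for full detail, which is why the output above lists a "WHOIS server" field.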
<details><summary>🔍 <strong>!shodan [command] [query]</strong></summary>
<p>Shodan.io integration for security reconnaissance and threat intelligence.</p>
<p><strong>Commands:</strong></p>
@@ -290,7 +307,33 @@ Search Exploit-DB for security vulnerabilities and exploits. Returns detailed in
</details>
<details><summary>📄 <strong>!text [prompt]</strong></summary>
<p>Generates text using the Infermatic AI API. Supports multiple models, configurable parameters, and model listing. Uses a queuing system for sequential processing.</p>
<p><strong>Usage:</strong></p>
<ul>
<li><code>!text &lt;prompt&gt;</code> - Generate text using the default model</li>
<li><code>!text --list-models</code> - List all available models from Infermatic AI</li>
<li><code>!text --use-model &lt;model_name&gt; &lt;prompt&gt;</code> - Use a specific model instead of the default</li>
<li><code>!text --temperature &lt;value&gt; &lt;prompt&gt;</code> - Set temperature (0.0-1.0, default: 0.9)</li>
<li><code>!text --max-tokens &lt;value&gt; &lt;prompt&gt;</code> - Set maximum tokens to generate (default: 2048)</li>
</ul>
<p><strong>Configuration:</strong></p>
<ul>
<li>Requires <code>INFERMATIC_API</code> environment variable set to your API key</li>
<li>Optional <code>INFERMATIC_MODEL</code> environment variable overrides the default model (Sao10K-L3.1-70B-Hanami-x1)</li>
</ul>
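Wiring the two environment variables into a request might look like the sketch below. This assumes Infermatic AI exposes an OpenAI-compatible chat-completions payload, which is an assumption, not something the commit confirms; `build_completion_request` is a hypothetical helper.

```python
import os

def build_completion_request(prompt, model=None, temperature=0.9, max_tokens=2048):
    """Build headers and JSON body for a chat-completion request.

    Assumes an OpenAI-compatible API shape (Bearer auth, "messages" body);
    defaults mirror the plugin's documented ones (temperature 0.9, 2048 tokens).
    """
    api_key = os.environ["INFERMATIC_API"]  # required, per the docs above
    model = model or os.environ.get("INFERMATIC_MODEL", "Sao10K-L3.1-70B-Hanami-x1")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return headers, body
```

Reading the key with `os.environ[...]` (rather than `.get`) makes a missing `INFERMATIC_API` fail loudly at request time, matching the "Requires" wording above.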
<p><strong>Model Management:</strong></p>
<ul>
<li>Use <code>!text --list-models</code> to see all available models</li>
<li>Models support different capabilities and context lengths</li>
<li>Costs and token limits vary by model</li>
</ul>
<p><strong>Examples:</strong></p>
<ul>
<li><code>!text write a python function to calculate fibonacci</code></li>
<li><code>!text --list-models</code></li>
<li><code>!text --use-model llama-v3-8b-instruct explain quantum computing</code></li>
<li><code>!text --temperature 0.7 --max-tokens 500 write a haiku about AI</code></li>
</ul>
</details>
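The flag grammar documented above (options first, then a free-form prompt) maps cleanly onto `argparse`. A sketch of how the plugin might split a `!text` argument string, assuming this structure (the real plugin may parse differently; `parse_text_args` is a hypothetical name):

```python
import argparse
import shlex

def parse_text_args(arg_string: str):
    """Parse !text flags ahead of the free-form prompt.

    Defaults mirror the documented ones: temperature 0.9, max tokens 2048.
    Note: argparse calls sys.exit() on invalid input; a bot would catch that.
    """
    parser = argparse.ArgumentParser(prog="!text", add_help=False)
    parser.add_argument("--list-models", action="store_true")
    parser.add_argument("--use-model", dest="model", default=None)
    parser.add_argument("--temperature", type=float, default=0.9)
    parser.add_argument("--max-tokens", type=int, default=2048)
    parser.add_argument("prompt", nargs="*")  # everything left is the prompt
    return parser.parse_args(shlex.split(arg_string))
```

For example, `parse_text_args("--temperature 0.7 --max-tokens 500 write a haiku about AI")` yields the two overridden parameters plus the prompt words, while a bare `--list-models` leaves the prompt empty.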
<details><summary>📰 <strong>!xkcd</strong></summary>