add infermatic-text and whois plugins for AI text generation and WHOIS lookups
@@ -205,6 +205,41 @@ Data Returned:
Requires DNSDUMPSTER_KEY environment variable in .env file
```

### 🔍 WHOIS Lookup

**🌐 !whois <domain/ip>**
Perform comprehensive WHOIS lookups for domains and IP addresses.

**Features:**

- Domain validation and IP address recognition
- Registrar information and WHOIS server details
- Registration, update, and expiration dates
- Domain status and name server information
- Organization and geographic contact details
- Formatted HTML output with clear sections
- Comprehensive error handling for invalid queries
**Usage Examples:**

```bash
!whois example.com
!whois google.com
!whois 8.8.8.8
!whois 1.1.1.1
```
**Output includes:**

- Domain/IP query information
- Registrar and WHOIS server
- Important dates (creation, update, expiration)
- Domain status codes
- Name servers (up to 5, with count if more)
- Contact information (organization, country, state, city)
**Error Handling:**

- Validates domain/IP format before querying
- Provides clear error messages for failed lookups
- Handles rate limiting and WHOIS server unavailability
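The format validation step can be sketched with the standard library alone: accept anything `ipaddress` parses as an IPv4/IPv6 address, otherwise require a plausible domain name. This is a minimal illustration under those assumptions; the plugin's actual checks may differ.

```python
import ipaddress
import re

# Rough domain pattern: dot-separated labels of 1-63 alphanumeric/hyphen
# characters, ending in an alphabetic TLD, 253 chars total at most.
_DOMAIN_RE = re.compile(
    r"^(?=.{1,253}$)(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+"
    r"[a-zA-Z]{2,63}$"
)

def classify_query(target: str) -> str:
    """Return 'ip', 'domain', or 'invalid' for a !whois argument."""
    try:
        ipaddress.ip_address(target)  # handles both IPv4 and IPv6
        return "ip"
    except ValueError:
        pass
    return "domain" if _DOMAIN_RE.match(target) else "invalid"
```

Rejecting malformed input up front avoids a pointless round-trip to a WHOIS server.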
## ExploitDB Plugin
|
||||
|
||||
A security plugin that searches Exploit-DB for vulnerabilities and exploits directly from Matrix.
|
||||
@@ -368,9 +403,33 @@ Generates images using self-hosted Stable Diffusion with customizable parameters

- `--sampler` - Sampler name (default: DPM++ SDE)
**📄 !text [prompt] [options]**
|
||||
Generates text using Ollama's Mistral 7B Instruct model:
|
||||
- `--max_tokens` - Maximum tokens to generate (default: 512)
|
||||
- `--temperature` - Sampling temperature (default: 0.7)
|
||||
Generates text using the Infermatic AI API with multiple model support:
|
||||
**Main Commands:**

- `!text <prompt>` - Generate text using the default model from INFERMATIC_MODEL
- `!text --list-models` - List all available models from Infermatic AI
- `!text --use-model <model> <prompt>` - Use a specific model instead of the default
**Parameters:**

- `--temperature <value>` - Set generation temperature (0.0-1.0, default: 0.9)
- `--max-tokens <value>` - Set maximum tokens to generate (default: 2048)
**Configuration:**

- Requires `INFERMATIC_API` environment variable in `.env` file (your API key)
- Requires `INFERMATIC_MODEL` environment variable in `.env` file (default: Sao10K-L3.1-70B-Hanami-x1)
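Assuming Infermatic AI exposes an OpenAI-compatible chat-completions API (an assumption for illustration only, not confirmed by this README), a request could be assembled from these environment variables like so:

```python
import os

def build_completion_request(prompt, temperature=0.9, max_tokens=2048, model=None):
    """Assemble headers and a chat-completion payload from the .env settings.

    Sketch only: the real endpoint URL and request schema should be taken
    from the Infermatic AI documentation.
    """
    model = model or os.environ.get("INFERMATIC_MODEL", "Sao10K-L3.1-70B-Hanami-x1")
    headers = {
        "Authorization": f"Bearer {os.environ.get('INFERMATIC_API', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return headers, payload
```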
**Examples:**
|
||||
```bash
|
||||
!text write a python function to calculate fibonacci numbers
|
||||
!text --use-model llama-v3-8b-instruct explain quantum computing simply
|
||||
!text --temperature 0.7 --max-tokens 500 write a haiku about artificial intelligence
|
||||
!text --list-models
|
||||
```
|
||||
|
||||
**Model Management:**
|
||||
- Use `--list-models` to see available models with their capabilities
|
||||
- Different models support various context lengths and specializations
|
||||
- Costs and token limits vary by model
|
||||
|
||||
### Media & Search Commands
|
||||
|
||||
|
||||