Compare commits


44 Commits

Author SHA1 Message Date
-LAN-
903cf33342 feat(build-push): Avoid updating the latest version tag.
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-03 12:41:19 +08:00
-LAN-
4b938ab18d chore: Bump version
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 16:25:40 +08:00
-LAN-
88356de923 fix: Refactor web reader to use readabilipy (#19789)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 16:23:17 +08:00
-LAN-
5f09900dca chore(api): Upgrade dependencies (#19736)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-15 14:47:15 +08:00
-LAN-
9ac99abf20 docs(CHANGELOG): Update CHANGELOG
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-14 18:03:05 +08:00
-LAN-
32588f562e feat(model): fix and re-add gpt-4.1.
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-14 18:02:32 +08:00
Joel
36f8bd3f1a chore: frontend third-party package security issue (#19655) 2025-05-14 14:08:05 +08:00
-LAN-
c919074e06 docs(CHANGELOG.md): Update CHANGELOG.md
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-13 10:31:40 +08:00
kelvintsim
88cd9aedb7 add gunicorn keepalive setting (#19537)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: Bowen Liang <liang.bowen.123@qq.com>
2025-05-13 10:28:13 +08:00
-LAN-
16a4f77fb4 fix(config): Allow DB_EXTRAS to set search_path via options
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-13 10:19:08 +08:00
-LAN-
3401c52665 chore(pyproject.toml): Upgrade huggingface-hub, transformers and resend (#19563)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-12 23:21:57 +08:00
Zixuan Cheng
4fa3d78ed8 Revert "feat : add GPT4.1 in the model providers" (#19002) 2025-04-28 18:15:24 +08:00
-LAN-
5f7f851b17 fix: Refines None checks in result transformation
Simplifies the code by replacing type checks for None with
direct comparisons, improving readability and consistency in
handling None values during output validation.

Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-28 15:40:14 +08:00
-LAN-
559ab46ee1 fix: Removes redundant token calculations and updates dependencies
Eliminates unnecessary pre-calculation of token limits and recalculation of max tokens
across multiple app runners, simplifying the logic for prompt handling.

Updates tiktoken library from version 0.8.0 to 0.9.0 for improved tokenization performance.

Increases default token limit in TokenBufferMemory to accommodate larger prompt messages.

These changes streamline the token management process and leverage the latest
improvements in the tiktoken library.

Fixes potential token overflow issues and prepares the system for handling larger
inputs more efficiently.

Relates to internal optimization tasks.

Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-28 15:39:12 +08:00
-LAN-
df98223c8c chore: Updates to version 0.15.7 with new model support
Adds support for GPT-4.1 and Amazon Bedrock DeepSeek-R1 models.
Fixes issues with app creation from template categories and
DSL version checks.

Updates version numbers in configuration files and Docker
setup to 0.15.7 for consistency.

Addresses issues #18807, #18868, #18872, #18878, and #18912.

Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-28 14:19:07 +08:00
Zixuan Cheng
144f9507f8 feat : add GPT4.1 in the model providers (#18912) 2025-04-27 19:31:20 +08:00
kelvintsim
2e097a1ac0 add bedrock deepseek-r1 (#18908) 2025-04-27 19:30:42 +08:00
NFish
9f7d8a981f Patch: hotfix/create from template category (#18807) (#18868) 2025-04-27 14:47:18 +08:00
zxhlyh
40b31bafd5 fix: check dsl version when create app from explore template (#18872) (#18878) 2025-04-27 14:21:45 +08:00
-LAN-
d38a2c95fb docs(CHANGELOG): Update CHANGELOG.md
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-25 18:31:08 +08:00
-LAN-
7d18e2a0ef feat(app_dsl_service): Refines version compatibility logic
Updates logic to handle various version comparisons, ensuring
more precise status returns based on version differences.
Improves handling of older and newer versions to prevent
mismatches and ensure appropriate compatibility status.

Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-25 18:27:31 +08:00
kelvintsim
024f242251 add bedrock claude-sonnet-3.7 (#18788) 2025-04-25 17:35:12 +08:00
-LAN-
bfdce78ca5 chore(*): Bump up to 0.15.6
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-23 14:06:46 +08:00
-LAN-
00c2258352 docs(CHANGELOG): Adds initial changelog for version 0.15.6
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-04-23 13:55:33 +08:00
Joel
a1b3d41712 fix: clickjacking (#18552) 2025-04-22 17:08:52 +08:00
kautsar_masuara
b26e20fe34 fix: fix vertex gemini 2.0 flash 001 schema (#18405)
Co-authored-by: achmad-kautsar <achmad.kautsar@insignia.co.id>
2025-04-19 22:04:13 +08:00
NFish
161ff432f1 fix: update reset password token when email code verify success (#18362) 2025-04-18 17:15:15 +08:00
Xiyuan Chen
99a9def623 fix: reset_password security issue (#18366) 2025-04-18 05:04:44 -04:00
Alexi.F
fe1846c437 fix: change gemini-2.0-flash to validate google api #17082 (#17115) 2025-03-30 13:04:12 +08:00
-LAN-
8e75eb5c63 fix: update version to 0.15.5 in packaging and docker-compose files
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-24 16:47:06 +08:00
-LAN-
970508fcb6 fix: update GitHub Actions workflow to trigger on tags
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-24 16:45:29 +08:00
NFish
9283a5414f fix: update yarn.lock 2025-03-24 16:41:07 +08:00
-LAN-
2a2a0e9be9 fix: update DifySandbox image version to 0.2.11 in docker-compose files
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-24 15:37:55 +08:00
Joel
061a765b7d fix: sanitizer svg to avoid xss (#16608) 2025-03-24 14:48:40 +08:00
-LAN-
acd7fead87 feat: remove Vanna provider and associated assets from the project
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-24 14:34:03 +08:00
NFish
bbb080d5b2 fix: update chatbot help doc link on the create app form 2025-03-24 11:28:35 +08:00
NFish
c01d8a70f3 fix: upgrade nextjs to v14.2.25. a security patch for CVE-2025-29927. 2025-03-24 10:32:18 +08:00
-LAN-
1ca15989e0 chore: update version to 0.15.4 in configuration and docker files
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-21 16:39:06 +08:00
-LAN-
8b5a3a9424 Merge branch 'release/0.15.4' of github.com:langgenius/dify into release/0.15.4 2025-03-21 16:31:06 +08:00
-LAN-
42ddcf1edd chore: remove 0.15.3 branch config in the build action
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-21 16:30:33 +08:00
Joel
21561df10f fix: xss in render svg (#16437) 2025-03-21 15:24:58 +08:00
crazywoola
0e33a3aa5f chore: add ci 2025-02-19 14:34:36 +08:00
Hash Brown
d3895bcd6b revert 2025-02-19 14:32:28 +08:00
Hash Brown
eeb390650b fix: build failed 2025-02-19 14:32:28 +08:00
64 changed files with 9097 additions and 8526 deletions

View File

@@ -5,8 +5,8 @@ on:
branches:
- "main"
- "deploy/dev"
release:
types: [published]
tags:
- "*"
concurrency:
group: build-push-${{ github.head_ref || github.run_id }}
@@ -125,7 +125,6 @@ jobs:
with:
images: ${{ env[matrix.image_name_env] }}
tags: |
type=raw,value=latest,enable=${{ startsWith(github.ref, 'refs/tags/') && !contains(github.ref, '-') }}
type=ref,event=branch
type=sha,enable=true,priority=100,prefix=,suffix=,format=long
type=raw,value=${{ github.ref_name }},enable=${{ startsWith(github.ref, 'refs/tags/') }}
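
Illustration (not part of the diff): the `latest` tag is now gated on a clean release tag. A minimal Python sketch of the same condition used by the metadata step above:

```python
# Mirrors the expression above:
# startsWith(github.ref, 'refs/tags/') && !contains(github.ref, '-')
def tags_latest(ref: str) -> bool:
    """True only for stable release tags, so `latest` never moves on pre-releases."""
    return ref.startswith("refs/tags/") and "-" not in ref

assert tags_latest("refs/tags/0.15.8")             # stable tag: also tagged latest
assert not tags_latest("refs/tags/0.16.0-beta.1")  # pre-release: latest untouched
assert not tags_latest("refs/heads/main")          # branch build: latest untouched
```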

.markdownlint.json (new file, 4 lines)
View File

@@ -0,0 +1,4 @@
{
"MD024": false,
"MD013": false
}

CHANGELOG.md (new file, 45 lines)
View File

@@ -0,0 +1,45 @@
# Changelog
All notable changes to Dify will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.15.8] - 2025-05-30
### Added
- Added gunicorn keepalive setting (#19537)
### Fixed
- Fixed database configuration to allow DB_EXTRAS to set search_path via options (#16a4f77)
- Fixed frontend third-party package security issues (#19655)
- Updated dependencies: huggingface-hub (~0.16.4 to ~0.31.0), transformers (~4.35.0 to ~4.39.0), and resend (~0.7.0 to ~2.9.0) (#19563)
- Downgraded boto3 from 1.36 to 1.35 (#19736)
## [0.15.7] - 2025-04-27
### Added
- Added support for GPT-4.1 in model providers (#18912)
- Added support for Amazon Bedrock DeepSeek-R1 model (#18908)
- Added support for Amazon Bedrock Claude Sonnet 3.7 model (#18788)
- Refined version compatibility logic in app DSL service
### Fixed
- Fixed issue with creating apps from template categories (#18807, #18868)
- Fixed DSL version check when creating apps from explore templates (#18872, #18878)
## [0.15.6] - 2025-04-22
### Security
- Fixed clickjacking vulnerability (#18552)
- Fixed reset password security issue (#18366)
- Updated reset password token when email code verification succeeds (#18362)
### Fixed
- Fixed Vertex AI Gemini 2.0 Flash 001 schema (#18405)

View File

@@ -430,4 +430,7 @@ CREATE_TIDB_SERVICE_JOB_ENABLED=false
# Maximum number of submitted thread count in a ThreadPool for parallel node execution
MAX_SUBMIT_COUNT=100
# Lockout duration in seconds
LOGIN_LOCKOUT_DURATION=86400
LOGIN_LOCKOUT_DURATION=86400
# Prevent Clickjacking
ALLOW_EMBED=false

View File

@@ -1,5 +1,5 @@
from typing import Any, Literal, Optional
from urllib.parse import quote_plus
from urllib.parse import parse_qsl, quote_plus
from pydantic import Field, NonNegativeInt, PositiveFloat, PositiveInt, computed_field
from pydantic_settings import BaseSettings
@@ -166,14 +166,28 @@ class DatabaseConfig(BaseSettings):
default=False,
)
@computed_field
@computed_field # type: ignore[misc]
@property
def SQLALCHEMY_ENGINE_OPTIONS(self) -> dict[str, Any]:
# Parse DB_EXTRAS for 'options'
db_extras_dict = dict(parse_qsl(self.DB_EXTRAS))
options = db_extras_dict.get("options", "")
# Always include timezone
timezone_opt = "-c timezone=UTC"
if options:
# Merge user options and timezone
merged_options = f"{options} {timezone_opt}"
else:
merged_options = timezone_opt
connect_args = {"options": merged_options}
return {
"pool_size": self.SQLALCHEMY_POOL_SIZE,
"max_overflow": self.SQLALCHEMY_MAX_OVERFLOW,
"pool_recycle": self.SQLALCHEMY_POOL_RECYCLE,
"pool_pre_ping": self.SQLALCHEMY_POOL_PRE_PING,
"connect_args": {"options": "-c timezone=UTC"},
"connect_args": connect_args,
}
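
Illustration (not part of the diff): how the new computed field merges user-supplied `options` from `DB_EXTRAS` with the mandatory timezone setting, a sketch assuming the same `parse_qsl` logic as above:

```python
from urllib.parse import parse_qsl

def merged_options(db_extras: str) -> str:
    # DB_EXTRAS is parsed as a query string; only the 'options' key matters here.
    options = dict(parse_qsl(db_extras)).get("options", "")
    timezone_opt = "-c timezone=UTC"
    return f"{options} {timezone_opt}" if options else timezone_opt

# e.g. DB_EXTRAS="options=-c search_path=myschema"
assert merged_options("options=-c search_path=myschema") == "-c search_path=myschema -c timezone=UTC"
assert merged_options("") == "-c timezone=UTC"  # timezone is always applied
```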

View File

@@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description="Dify version",
default="0.15.3",
default="0.15.8",
)
COMMIT_SHA: str = Field(

View File

@@ -8,7 +8,7 @@ from constants.languages import languages
from controllers.console import api
from controllers.console.auth.error import EmailCodeError, InvalidEmailError, InvalidTokenError, PasswordMismatchError
from controllers.console.error import AccountInFreezeError, AccountNotFound, EmailSendIpLimitError
from controllers.console.wraps import setup_required
from controllers.console.wraps import email_password_login_enabled, setup_required
from events.tenant_event import tenant_was_created
from extensions.ext_database import db
from libs.helper import email, extract_remote_ip
@@ -22,6 +22,7 @@ from services.feature_service import FeatureService
class ForgotPasswordSendEmailApi(Resource):
@setup_required
@email_password_login_enabled
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=email, required=True, location="json")
@@ -53,6 +54,7 @@ class ForgotPasswordSendEmailApi(Resource):
class ForgotPasswordCheckApi(Resource):
@setup_required
@email_password_login_enabled
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=str, required=True, location="json")
@@ -72,11 +74,20 @@ class ForgotPasswordCheckApi(Resource):
if args["code"] != token_data.get("code"):
raise EmailCodeError()
return {"is_valid": True, "email": token_data.get("email")}
# Verified, revoke the first token
AccountService.revoke_reset_password_token(args["token"])
# Refresh token data by generating a new token
_, new_token = AccountService.generate_reset_password_token(
user_email, code=args["code"], additional_data={"phase": "reset"}
)
return {"is_valid": True, "email": token_data.get("email"), "token": new_token}
class ForgotPasswordResetApi(Resource):
@setup_required
@email_password_login_enabled
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("token", type=str, required=True, nullable=False, location="json")
@@ -95,6 +106,9 @@ class ForgotPasswordResetApi(Resource):
if reset_data is None:
raise InvalidTokenError()
# Must use token in reset phase
if reset_data.get("phase", "") != "reset":
raise InvalidTokenError()
AccountService.revoke_reset_password_token(token)
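
Illustration (not part of the diff): an in-memory sketch of the two-phase flow introduced above. The email-code token becomes single-use and is swapped for a second token carrying `phase: reset`, which is the only token the reset endpoint will accept. All names below are invented for the example, not Dify APIs.

```python
import secrets

token_store: dict[str, dict] = {}  # stand-in for the real token manager

def issue(email: str, code: str, phase: str = "verify") -> str:
    token = secrets.token_urlsafe(16)
    token_store[token] = {"email": email, "code": code, "phase": phase}
    return token

def check_code(token: str, code: str) -> str:
    data = token_store.pop(token, None)        # phase-1 token is revoked on use
    if data is None or data["code"] != code:
        raise ValueError("invalid token or code")
    return issue(data["email"], code, phase="reset")  # fresh phase-2 token

def reset_password(token: str) -> str:
    data = token_store.pop(token, None)
    if data is None or data.get("phase") != "reset":  # must use the reset-phase token
        raise ValueError("invalid token")
    return data["email"]

t1 = issue("user@example.com", "123456")
t2 = check_code(t1, "123456")
assert reset_password(t2) == "user@example.com"
```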

View File

@@ -22,7 +22,7 @@ from controllers.console.error import (
EmailSendIpLimitError,
NotAllowedCreateWorkspace,
)
from controllers.console.wraps import setup_required
from controllers.console.wraps import email_password_login_enabled, setup_required
from events.tenant_event import tenant_was_created
from libs.helper import email, extract_remote_ip
from libs.password import valid_password
@@ -38,6 +38,7 @@ class LoginApi(Resource):
"""Resource for user login."""
@setup_required
@email_password_login_enabled
def post(self):
"""Authenticate user and login."""
parser = reqparse.RequestParser()
@@ -110,6 +111,7 @@ class LogoutApi(Resource):
class ResetPasswordSendEmailApi(Resource):
@setup_required
@email_password_login_enabled
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("email", type=email, required=True, location="json")

View File

@@ -154,3 +154,16 @@ def enterprise_license_required(view):
return view(*args, **kwargs)
return decorated
def email_password_login_enabled(view):
@wraps(view)
def decorated(*args, **kwargs):
features = FeatureService.get_system_features()
if features.enable_email_password_login:
return view(*args, **kwargs)
# otherwise, return 403
abort(403)
return decorated

View File

@@ -104,7 +104,6 @@ class CotAgentRunner(BaseAgentRunner, ABC):
# recalc llm max tokens
prompt_messages = self._organize_prompt_messages()
self.recalc_llm_max_tokens(self.model_config, prompt_messages)
# invoke model
chunks = model_instance.invoke_llm(
prompt_messages=prompt_messages,

View File

@@ -84,7 +84,6 @@ class FunctionCallAgentRunner(BaseAgentRunner):
# recalc llm max tokens
prompt_messages = self._organize_prompt_messages()
self.recalc_llm_max_tokens(self.model_config, prompt_messages)
# invoke model
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult] = model_instance.invoke_llm(
prompt_messages=prompt_messages,

View File

@@ -55,20 +55,6 @@ class AgentChatAppRunner(AppRunner):
query = application_generate_entity.query
files = application_generate_entity.files
# Pre-calculate the number of tokens of the prompt messages,
# and return the rest number of tokens by model context token size limit and max token size limit.
# If the rest number of tokens is not enough, raise exception.
# Include: prompt template, inputs, query(optional), files(optional)
# Not Include: memory, external data, dataset context
self.get_pre_calculate_rest_tokens(
app_record=app_record,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
query=query,
)
memory = None
if application_generate_entity.conversation_id:
# get memory of conversation (read-only)

View File

@@ -15,10 +15,8 @@ from core.app.features.annotation_reply.annotation_reply import AnnotationReplyF
from core.app.features.hosting_moderation.hosting_moderation import HostingModerationFeature
from core.external_data_tool.external_data_fetch import ExternalDataFetch
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_manager import ModelInstance
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import AssistantPromptMessage, PromptMessage
from core.model_runtime.entities.model_entities import ModelPropertyKey
from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.moderation.input_moderation import InputModeration
from core.prompt.advanced_prompt_transform import AdvancedPromptTransform
@@ -31,106 +29,6 @@ if TYPE_CHECKING:
class AppRunner:
def get_pre_calculate_rest_tokens(
self,
app_record: App,
model_config: ModelConfigWithCredentialsEntity,
prompt_template_entity: PromptTemplateEntity,
inputs: Mapping[str, str],
files: Sequence["File"],
query: Optional[str] = None,
) -> int:
"""
Get pre calculate rest tokens
:param app_record: app record
:param model_config: model config entity
:param prompt_template_entity: prompt template entity
:param inputs: inputs
:param files: files
:param query: query
:return:
"""
# Invoke model
model_instance = ModelInstance(
provider_model_bundle=model_config.provider_model_bundle, model=model_config.model
)
model_context_tokens = model_config.model_schema.model_properties.get(ModelPropertyKey.CONTEXT_SIZE)
max_tokens = 0
for parameter_rule in model_config.model_schema.parameter_rules:
if parameter_rule.name == "max_tokens" or (
parameter_rule.use_template and parameter_rule.use_template == "max_tokens"
):
max_tokens = (
model_config.parameters.get(parameter_rule.name)
or model_config.parameters.get(parameter_rule.use_template or "")
) or 0
if model_context_tokens is None:
return -1
if max_tokens is None:
max_tokens = 0
# get prompt messages without memory and context
prompt_messages, stop = self.organize_prompt_messages(
app_record=app_record,
model_config=model_config,
prompt_template_entity=prompt_template_entity,
inputs=inputs,
files=files,
query=query,
)
prompt_tokens = model_instance.get_llm_num_tokens(prompt_messages)
rest_tokens: int = model_context_tokens - max_tokens - prompt_tokens
if rest_tokens < 0:
raise InvokeBadRequestError(
"Query or prefix prompt is too long, you can reduce the prefix prompt, "
"or shrink the max token, or switch to a llm with a larger token limit size."
)
return rest_tokens
def recalc_llm_max_tokens(
self, model_config: ModelConfigWithCredentialsEntity, prompt_messages: list[PromptMessage]
):
# recalc max_tokens if sum(prompt_token + max_tokens) over model token limit
model_instance = ModelInstance(
provider_model_bundle=model_config.provider_model_bundle, model=model_config.model
)
model_context_tokens = model_config.model_schema.model_properties.get(ModelPropertyKey.CONTEXT_SIZE)
max_tokens = 0
for parameter_rule in model_config.model_schema.parameter_rules:
if parameter_rule.name == "max_tokens" or (
parameter_rule.use_template and parameter_rule.use_template == "max_tokens"
):
max_tokens = (
model_config.parameters.get(parameter_rule.name)
or model_config.parameters.get(parameter_rule.use_template or "")
) or 0
if model_context_tokens is None:
return -1
if max_tokens is None:
max_tokens = 0
prompt_tokens = model_instance.get_llm_num_tokens(prompt_messages)
if prompt_tokens + max_tokens > model_context_tokens:
max_tokens = max(model_context_tokens - prompt_tokens, 16)
for parameter_rule in model_config.model_schema.parameter_rules:
if parameter_rule.name == "max_tokens" or (
parameter_rule.use_template and parameter_rule.use_template == "max_tokens"
):
model_config.parameters[parameter_rule.name] = max_tokens
def organize_prompt_messages(
self,
app_record: App,

View File

@@ -50,20 +50,6 @@ class ChatAppRunner(AppRunner):
query = application_generate_entity.query
files = application_generate_entity.files
# Pre-calculate the number of tokens of the prompt messages,
# and return the rest number of tokens by model context token size limit and max token size limit.
# If the rest number of tokens is not enough, raise exception.
# Include: prompt template, inputs, query(optional), files(optional)
# Not Include: memory, external data, dataset context
self.get_pre_calculate_rest_tokens(
app_record=app_record,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
query=query,
)
memory = None
if application_generate_entity.conversation_id:
# get memory of conversation (read-only)
@@ -194,9 +180,6 @@ class ChatAppRunner(AppRunner):
if hosting_moderation_result:
return
# Re-calculate the max tokens if sum(prompt_token + max_tokens) over model token limit
self.recalc_llm_max_tokens(model_config=application_generate_entity.model_conf, prompt_messages=prompt_messages)
# Invoke model
model_instance = ModelInstance(
provider_model_bundle=application_generate_entity.model_conf.provider_model_bundle,

View File

@@ -43,20 +43,6 @@ class CompletionAppRunner(AppRunner):
query = application_generate_entity.query
files = application_generate_entity.files
# Pre-calculate the number of tokens of the prompt messages,
# and return the rest number of tokens by model context token size limit and max token size limit.
# If the rest number of tokens is not enough, raise exception.
# Include: prompt template, inputs, query(optional), files(optional)
# Not Include: memory, external data, dataset context
self.get_pre_calculate_rest_tokens(
app_record=app_record,
model_config=application_generate_entity.model_conf,
prompt_template_entity=app_config.prompt_template,
inputs=inputs,
files=files,
query=query,
)
# organize all inputs and template to prompt messages
# Include: prompt template, inputs, query(optional), files(optional)
prompt_messages, stop = self.organize_prompt_messages(
@@ -152,9 +138,6 @@ class CompletionAppRunner(AppRunner):
if hosting_moderation_result:
return
# Re-calculate the max tokens if sum(prompt_token + max_tokens) over model token limit
self.recalc_llm_max_tokens(model_config=application_generate_entity.model_conf, prompt_messages=prompt_messages)
# Invoke model
model_instance = ModelInstance(
provider_model_bundle=application_generate_entity.model_conf.provider_model_bundle,

View File

@@ -26,7 +26,7 @@ class TokenBufferMemory:
self.model_instance = model_instance
def get_history_prompt_messages(
self, max_token_limit: int = 2000, message_limit: Optional[int] = None
self, max_token_limit: int = 100000, message_limit: Optional[int] = None
) -> Sequence[PromptMessage]:
"""
Get history prompt messages.

View File

@@ -0,0 +1,115 @@
model: us.anthropic.claude-3-7-sonnet-20250219-v1:0
label:
en_US: Claude 3.7 Sonnet(US.Cross Region Inference)
icon: icon_s_en.svg
model_type: llm
features:
- agent-thought
- vision
- tool-call
- stream-tool-call
model_properties:
mode: chat
context_size: 200000
# docs: https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-anthropic-claude-messages.html
parameter_rules:
- name: enable_cache
label:
zh_Hans: 启用提示缓存
en_US: Enable Prompt Cache
type: boolean
required: false
default: true
help:
zh_Hans: 启用提示缓存可以提高性能并降低成本。Claude 3.7 Sonnet支持在system、messages和tools字段中使用缓存检查点。
en_US: Enable prompt caching to improve performance and reduce costs. Claude 3.7 Sonnet supports cache checkpoints in system, messages, and tools fields.
- name: reasoning_type
label:
zh_Hans: 推理配置
en_US: Reasoning Type
type: boolean
required: false
default: false
placeholder:
zh_Hans: 设置推理配置
en_US: Set reasoning configuration
help:
zh_Hans: 控制模型的推理能力。启用时temperature将固定为1且top_p将被禁用。
en_US: Controls the model's reasoning capability. When enabled, temperature will be fixed to 1 and top_p will be disabled.
- name: reasoning_budget
show_on:
- variable: reasoning_type
value: true
label:
zh_Hans: 推理预算
en_US: Reasoning Budget
type: int
default: 1024
min: 0
max: 128000
help:
zh_Hans: 推理的预算限制最小1024必须小于max_tokens。仅在推理类型为enabled时可用。
en_US: Budget limit for reasoning (minimum 1024), must be less than max_tokens. Only available when reasoning type is enabled.
- name: max_tokens
use_template: max_tokens
required: true
label:
zh_Hans: 最大token数
en_US: Max Tokens
type: int
default: 8192
min: 1
max: 128000
help:
zh_Hans: 停止前生成的最大令牌数。请注意Anthropic Claude 模型可能会在达到 max_tokens 的值之前停止生成令牌。不同的 Anthropic Claude 模型对此参数具有不同的最大值。
en_US: The maximum number of tokens to generate before stopping. Note that Anthropic Claude models might stop generating tokens before reaching the value of max_tokens. Different Anthropic Claude models have different maximum values for this parameter.
- name: temperature
use_template: temperature
required: false
label:
zh_Hans: 模型温度
en_US: Model Temperature
type: float
default: 1
min: 0.0
max: 1.0
help:
zh_Hans: 生成内容的随机性。当推理功能启用时该值将被固定为1。
en_US: The amount of randomness injected into the response. When reasoning is enabled, this value will be fixed to 1.
- name: top_p
show_on:
- variable: reasoning_type
value: disabled
use_template: top_p
label:
zh_Hans: Top P
en_US: Top P
required: false
type: float
default: 0.999
min: 0.000
max: 1.000
help:
zh_Hans: 在核采样中的概率阈值。当推理功能启用时,该参数将被禁用。
en_US: The probability threshold in nucleus sampling. When reasoning is enabled, this parameter will be disabled.
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
required: false
type: int
default: 0
min: 0
# tip docs from aws has error, max value is 500
max: 500
help:
zh_Hans: 对于每个后续标记,仅从前 K 个选项中进行采样。使用 top_k 删除长尾低概率响应。
en_US: Only sample from the top K options for each subsequent token. Use top_k to remove long tail low probability responses.
- name: response_format
use_template: response_format
pricing:
input: '0.003'
output: '0.015'
unit: '0.001'
currency: USD

View File

@@ -58,6 +58,7 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
# TODO There is invoke issue: context limit on Cohere Model, will add them after fixed.
CONVERSE_API_ENABLED_MODEL_INFO = [
{"prefix": "anthropic.claude-v2", "support_system_prompts": True, "support_tool_use": False},
{"prefix": "us.deepseek", "support_system_prompts": True, "support_tool_use": False},
{"prefix": "anthropic.claude-v1", "support_system_prompts": True, "support_tool_use": False},
{"prefix": "us.anthropic.claude-3", "support_system_prompts": True, "support_tool_use": True},
{"prefix": "eu.anthropic.claude-3", "support_system_prompts": True, "support_tool_use": True},

View File

@@ -0,0 +1,63 @@
model: us.deepseek.r1-v1:0
label:
en_US: DeepSeek-R1(US.Cross Region Inference)
icon: icon_s_en.svg
model_type: llm
features:
- agent-thought
- vision
- tool-call
- stream-tool-call
model_properties:
mode: chat
context_size: 32768
parameter_rules:
- name: max_tokens
use_template: max_tokens
required: true
label:
zh_Hans: 最大token数
en_US: Max Tokens
type: int
default: 8192
min: 1
max: 128000
help:
zh_Hans: 停止前生成的最大令牌数。
en_US: The maximum number of tokens to generate before stopping.
- name: temperature
use_template: temperature
required: false
label:
zh_Hans: 模型温度
en_US: Model Temperature
type: float
default: 1
min: 0.0
max: 1.0
help:
zh_Hans: 生成内容的随机性。当推理功能启用时该值将被固定为1。
en_US: The amount of randomness injected into the response. When reasoning is enabled, this value will be fixed to 1.
- name: top_p
show_on:
- variable: reasoning_type
value: disabled
use_template: top_p
label:
zh_Hans: Top P
en_US: Top P
required: false
type: float
default: 0.999
min: 0.000
max: 1.000
help:
zh_Hans: 在核采样中的概率阈值。当推理功能启用时,该参数将被禁用。
en_US: The probability threshold in nucleus sampling. When reasoning is enabled, this parameter will be disabled.
- name: response_format
use_template: response_format
pricing:
input: '0.001'
output: '0.005'
unit: '0.001'
currency: USD

View File

@@ -19,8 +19,8 @@ class GoogleProvider(ModelProvider):
try:
model_instance = self.get_model_instance(ModelType.LLM)
# Use `gemini-pro` model for validate,
model_instance.validate_credentials(model="gemini-pro", credentials=credentials)
# Use `gemini-2.0-flash` model for validate,
model_instance.validate_credentials(model="gemini-2.0-flash", credentials=credentials)
except CredentialsValidateFailedError as ex:
raise ex
except Exception as ex:

View File

@@ -19,5 +19,3 @@
- gemini-exp-1206
- gemini-exp-1121
- gemini-exp-1114
- gemini-pro
- gemini-pro-vision

View File

@@ -1,35 +0,0 @@
model: gemini-pro-vision
label:
en_US: Gemini Pro Vision
model_type: llm
features:
- vision
model_properties:
mode: chat
context_size: 12288
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens_to_sample
use_template: max_tokens
required: true
default: 4096
min: 1
max: 4096
pricing:
input: '0.00'
output: '0.00'
unit: '0.000001'
currency: USD
deprecated: true

View File

@@ -1,39 +0,0 @@
model: gemini-pro
label:
en_US: Gemini Pro
model_type: llm
features:
- agent-thought
- tool-call
- stream-tool-call
model_properties:
mode: chat
context_size: 30720
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: max_tokens_to_sample
use_template: max_tokens
required: true
default: 2048
min: 1
max: 2048
- name: response_format
use_template: response_format
pricing:
input: '0.00'
output: '0.00'
unit: '0.000001'
currency: USD
deprecated: true

View File

@@ -1,3 +1,4 @@
- gpt-4.1
- o1
- o1-2024-12-17
- o1-mini

View File

@@ -0,0 +1,60 @@
model: gpt-4.1
label:
zh_Hans: gpt-4.1
en_US: gpt-4.1
model_type: llm
features:
- multi-tool-call
- agent-thought
- stream-tool-call
- vision
model_properties:
mode: chat
context_size: 1047576
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
- name: max_tokens
use_template: max_tokens
default: 512
min: 1
max: 32768
- name: reasoning_effort
label:
zh_Hans: 推理工作
en_US: Reasoning Effort
type: string
help:
zh_Hans: 限制推理模型的推理工作
en_US: Constrains effort on reasoning for reasoning models
required: false
options:
- low
- medium
- high
- name: response_format
label:
zh_Hans: 回复格式
en_US: Response Format
type: string
help:
zh_Hans: 指定模型必须输出的格式
en_US: specifying the format that the model must output
required: false
options:
- text
- json_object
- json_schema
- name: json_schema
use_template: json_schema
pricing:
input: '2.00'
output: '8.00'
unit: '0.000001'
currency: USD

View File

@@ -1049,6 +1049,9 @@ class OpenAILargeLanguageModel(_CommonOpenAI, LargeLanguageModel):
"""Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb"""
if not messages and not tools:
return 0
if model.startswith("ft:"):
model = model.split(":")[1]
@@ -1057,18 +1060,18 @@ class OpenAILargeLanguageModel(_CommonOpenAI, LargeLanguageModel):
model = "gpt-4o"
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
model = "cl100k_base"
encoding = tiktoken.get_encoding(model)
except (KeyError, ValueError) as e:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
encoding_name = "cl100k_base"
encoding = tiktoken.get_encoding(encoding_name)
if model.startswith("gpt-3.5-turbo-0301"):
# every message follows <im_start>{role/name}\n{content}<im_end>\n
tokens_per_message = 4
# if there's a name, the role is omitted
tokens_per_name = -1
elif model.startswith("gpt-3.5-turbo") or model.startswith("gpt-4") or model.startswith(("o1", "o3")):
elif model.startswith("gpt-3.5-turbo") or model.startswith("gpt-4") or model.startswith(("o1", "o3", "o4")):
tokens_per_message = 3
tokens_per_name = 1
else:
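
Illustration (not part of the diff): `tiktoken.encoding_for_model` raises `KeyError` for model names it does not know, which is what the widened fallback above handles:

```python
import tiktoken

try:
    # newer model names may be missing from older tiktoken releases
    encoding = tiktoken.encoding_for_model("gpt-4.1")
except (KeyError, ValueError):
    encoding = tiktoken.get_encoding("cl100k_base")  # safe default encoding

print(len(encoding.encode("Hello, world")))
```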

View File

@@ -5,11 +5,6 @@ model_type: llm
features:
- agent-thought
- vision
- tool-call
- stream-tool-call
- document
- video
- audio
model_properties:
mode: chat
context_size: 1048576
@@ -20,20 +15,21 @@ parameter_rules:
use_template: top_p
- name: top_k
label:
zh_Hans: 取样数量
en_US: Top k
type: int
help:
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false
- name: presence_penalty
use_template: presence_penalty
- name: frequency_penalty
use_template: frequency_penalty
- name: max_output_tokens
use_template: max_tokens
required: true
default: 8192
min: 1
max: 8192
- name: json_schema
use_template: json_schema
pricing:
input: '0.00'
output: '0.00'

View File

@@ -77,5 +77,4 @@
- onebot
- regex
- trello
- vanna
- fal

Binary file not shown (image removed; was 4.5 KiB).

View File

@@ -1,134 +0,0 @@
from typing import Any, Union
from vanna.remote import VannaDefault # type: ignore
from core.tools.entities.tool_entities import ToolInvokeMessage
from core.tools.errors import ToolProviderCredentialValidationError
from core.tools.tool.builtin_tool import BuiltinTool
class VannaTool(BuiltinTool):
def _invoke(
self, user_id: str, tool_parameters: dict[str, Any]
) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
"""
invoke tools
"""
# Ensure runtime and credentials
if not self.runtime or not self.runtime.credentials:
raise ToolProviderCredentialValidationError("Tool runtime or credentials are missing")
api_key = self.runtime.credentials.get("api_key", None)
if not api_key:
raise ToolProviderCredentialValidationError("Please input api key")
model = tool_parameters.get("model", "")
if not model:
return self.create_text_message("Please input RAG model")
prompt = tool_parameters.get("prompt", "")
if not prompt:
return self.create_text_message("Please input prompt")
url = tool_parameters.get("url", "")
if not url:
return self.create_text_message("Please input URL/Host/DSN")
db_name = tool_parameters.get("db_name", "")
username = tool_parameters.get("username", "")
password = tool_parameters.get("password", "")
port = tool_parameters.get("port", 0)
base_url = self.runtime.credentials.get("base_url", None)
vn = VannaDefault(model=model, api_key=api_key, config={"endpoint": base_url})
db_type = tool_parameters.get("db_type", "")
if db_type in {"Postgres", "MySQL", "Hive", "ClickHouse"}:
if not db_name:
return self.create_text_message("Please input database name")
if not username:
return self.create_text_message("Please input username")
if port < 1:
return self.create_text_message("Please input port")
schema_sql = "SELECT * FROM INFORMATION_SCHEMA.COLUMNS"
match db_type:
case "SQLite":
schema_sql = "SELECT type, sql FROM sqlite_master WHERE sql is not null"
vn.connect_to_sqlite(url)
case "Postgres":
vn.connect_to_postgres(host=url, dbname=db_name, user=username, password=password, port=port)
case "DuckDB":
vn.connect_to_duckdb(url=url)
case "SQLServer":
vn.connect_to_mssql(url)
case "MySQL":
vn.connect_to_mysql(host=url, dbname=db_name, user=username, password=password, port=port)
case "Oracle":
vn.connect_to_oracle(user=username, password=password, dsn=url)
case "Hive":
vn.connect_to_hive(host=url, dbname=db_name, user=username, password=password, port=port)
case "ClickHouse":
vn.connect_to_clickhouse(host=url, dbname=db_name, user=username, password=password, port=port)
enable_training = tool_parameters.get("enable_training", False)
reset_training_data = tool_parameters.get("reset_training_data", False)
if enable_training:
if reset_training_data:
existing_training_data = vn.get_training_data()
if len(existing_training_data) > 0:
for _, training_data in existing_training_data.iterrows():
vn.remove_training_data(training_data["id"])
ddl = tool_parameters.get("ddl", "")
question = tool_parameters.get("question", "")
sql = tool_parameters.get("sql", "")
memos = tool_parameters.get("memos", "")
training_metadata = tool_parameters.get("training_metadata", False)
if training_metadata:
if db_type == "SQLite":
df_ddl = vn.run_sql(schema_sql)
for ddl in df_ddl["sql"].to_list():
vn.train(ddl=ddl)
else:
df_information_schema = vn.run_sql(schema_sql)
plan = vn.get_training_plan_generic(df_information_schema)
vn.train(plan=plan)
if ddl:
vn.train(ddl=ddl)
if sql:
if question:
vn.train(question=question, sql=sql)
else:
vn.train(sql=sql)
if memos:
vn.train(documentation=memos)
#########################################################################################
# Due to CVE-2024-5565, we have to disable the chart generation feature
# The Vanna library uses a prompt function to present the user with visualized results,
# it is possible to alter the prompt using prompt injection and run arbitrary Python code
# instead of the intended visualization code.
# Specifically - allowing external input to the library's "ask" method
# with "visualize" set to True (default behavior) leads to remote code execution.
# Affected versions: <= 0.5.5
#########################################################################################
allow_llm_to_see_data = tool_parameters.get("allow_llm_to_see_data", False)
res = vn.ask(
prompt, print_results=False, auto_train=True, visualize=False, allow_llm_to_see_data=allow_llm_to_see_data
)
result = []
if res is not None:
result.append(self.create_text_message(res[0]))
if len(res) > 1 and res[1] is not None:
result.append(self.create_text_message(res[1].to_markdown()))
if len(res) > 2 and res[2] is not None:
result.append(
self.create_blob_message(blob=res[2].to_image(format="svg"), meta={"mime_type": "image/svg+xml"})
)
return result

View File

@@ -1,213 +0,0 @@
identity:
name: vanna
author: QCTC
label:
en_US: Vanna.AI
zh_Hans: Vanna.AI
description:
human:
en_US: The fastest way to get actionable insights from your database just by asking questions.
zh_Hans: 一个基于大模型和RAG的Text2SQL工具。
llm: A tool for converting text to SQL.
parameters:
- name: prompt
type: string
required: true
label:
en_US: Prompt
zh_Hans: 提示词
pt_BR: Prompt
human_description:
en_US: used for generating SQL
zh_Hans: 用于生成SQL
llm_description: key words for generating SQL
form: llm
- name: model
type: string
required: true
label:
en_US: RAG Model
zh_Hans: RAG模型
human_description:
en_US: RAG Model for your database DDL
zh_Hans: 存储数据库训练数据的RAG模型
llm_description: RAG Model for generating SQL
form: llm
- name: db_type
type: select
required: true
options:
- value: SQLite
label:
en_US: SQLite
zh_Hans: SQLite
- value: Postgres
label:
en_US: Postgres
zh_Hans: Postgres
- value: DuckDB
label:
en_US: DuckDB
zh_Hans: DuckDB
- value: SQLServer
label:
en_US: Microsoft SQL Server
zh_Hans: 微软 SQL Server
- value: MySQL
label:
en_US: MySQL
zh_Hans: MySQL
- value: Oracle
label:
en_US: Oracle
zh_Hans: Oracle
- value: Hive
label:
en_US: Hive
zh_Hans: Hive
- value: ClickHouse
label:
en_US: ClickHouse
zh_Hans: ClickHouse
default: SQLite
label:
en_US: DB Type
zh_Hans: 数据库类型
human_description:
en_US: Database type.
zh_Hans: 选择要链接的数据库类型。
form: form
- name: url
type: string
required: true
label:
en_US: URL/Host/DSN
zh_Hans: URL/Host/DSN
human_description:
en_US: Please input depending on DB type, visit https://vanna.ai/docs/ for more specification
zh_Hans: 请根据数据库类型填入对应值详情参考https://vanna.ai/docs/
form: form
- name: db_name
type: string
required: false
label:
en_US: DB name
zh_Hans: 数据库名
human_description:
en_US: Database name
zh_Hans: 数据库名
form: form
- name: username
type: string
required: false
label:
en_US: Username
zh_Hans: 用户名
human_description:
en_US: Username
zh_Hans: 用户名
form: form
- name: password
type: secret-input
required: false
label:
en_US: Password
zh_Hans: 密码
human_description:
en_US: Password
zh_Hans: 密码
form: form
- name: port
type: number
required: false
label:
en_US: Port
zh_Hans: 端口
human_description:
en_US: Port
zh_Hans: 端口
form: form
- name: ddl
type: string
required: false
label:
en_US: Training DDL
zh_Hans: 训练DDL
human_description:
en_US: DDL statements for training data
zh_Hans: 用于训练RAG Model的建表语句
form: llm
- name: question
type: string
required: false
label:
en_US: Training Question
zh_Hans: 训练问题
human_description:
en_US: Question-SQL Pairs
zh_Hans: Question-SQL中的问题
form: llm
- name: sql
type: string
required: false
label:
en_US: Training SQL
zh_Hans: 训练SQL
human_description:
en_US: SQL queries to your training data
zh_Hans: 用于训练RAG Model的SQL语句
form: llm
- name: memos
type: string
required: false
label:
en_US: Training Memos
zh_Hans: 训练说明
human_description:
en_US: Sometimes you may want to add documentation about your business terminology or definitions
zh_Hans: 添加更多关于数据库的业务说明
form: llm
- name: enable_training
type: boolean
required: false
default: false
label:
en_US: Training Data
zh_Hans: 训练数据
human_description:
en_US: You only need to train once. Do not train again unless you want to add more training data
zh_Hans: 训练数据无更新时,训练一次即可
form: form
- name: reset_training_data
type: boolean
required: false
default: false
label:
en_US: Reset Training Data
zh_Hans: 重置训练数据
human_description:
en_US: Remove all training data in the current RAG Model
zh_Hans: 删除当前RAG Model中的所有训练数据
form: form
- name: training_metadata
type: boolean
required: false
default: false
label:
en_US: Training Metadata
zh_Hans: 训练元数据
human_description:
en_US: If enabled, it will attempt to train on the metadata of that database
zh_Hans: 是否自动从数据库获取元数据来训练
form: form
- name: allow_llm_to_see_data
type: boolean
required: false
default: false
label:
en_US: Whether to allow the LLM to see the data
zh_Hans: 是否允许LLM查看数据
human_description:
en_US: Whether to allow the LLM to see the data
zh_Hans: 是否允许LLM查看数据
form: form

View File

@@ -1,46 +0,0 @@
import re
from typing import Any
from urllib.parse import urlparse
from core.tools.errors import ToolProviderCredentialValidationError
from core.tools.provider.builtin.vanna.tools.vanna import VannaTool
from core.tools.provider.builtin_tool_provider import BuiltinToolProviderController
class VannaProvider(BuiltinToolProviderController):
def _get_protocol_and_main_domain(self, url):
parsed_url = urlparse(url)
protocol = parsed_url.scheme
hostname = parsed_url.hostname
port = f":{parsed_url.port}" if parsed_url.port else ""
# Check if the hostname is an IP address
is_ip = re.match(r"^\d{1,3}(\.\d{1,3}){3}$", hostname) is not None
# Return the full hostname (with port if present) for IP addresses, otherwise return the main domain
main_domain = f"{hostname}{port}" if is_ip else ".".join(hostname.split(".")[-2:]) + port
return f"{protocol}://{main_domain}"
def _validate_credentials(self, credentials: dict[str, Any]) -> None:
base_url = credentials.get("base_url")
if not base_url:
base_url = "https://ask.vanna.ai/rpc"
else:
base_url = base_url.removesuffix("/")
credentials["base_url"] = base_url
try:
VannaTool().fork_tool_runtime(
runtime={
"credentials": credentials,
}
).invoke(
user_id="",
tool_parameters={
"model": "chinook",
"db_type": "SQLite",
"url": f"{self._get_protocol_and_main_domain(credentials['base_url'])}/Chinook.sqlite",
"query": "What are the top 10 customers by sales?",
},
)
except Exception as e:
raise ToolProviderCredentialValidationError(str(e))

View File

@@ -1,35 +0,0 @@
identity:
author: QCTC
name: vanna
label:
en_US: Vanna.AI
zh_Hans: Vanna.AI
description:
en_US: The fastest way to get actionable insights from your database just by asking questions.
zh_Hans: 一个基于大模型和RAG的Text2SQL工具。
icon: icon.png
tags:
- utilities
- productivity
credentials_for_provider:
api_key:
type: secret-input
required: true
label:
en_US: API key
zh_Hans: API key
placeholder:
en_US: Please input your API key
zh_Hans: 请输入你的 API key
pt_BR: Please input your API key
help:
en_US: Get your API key from Vanna.AI
zh_Hans: 从 Vanna.AI 获取你的 API key
url: https://vanna.ai/account/profile
base_url:
type: text-input
required: false
label:
en_US: Vanna.AI Endpoint Base URL
placeholder:
en_US: https://ask.vanna.ai/rpc

View File

@@ -1,21 +1,13 @@
import hashlib
import json
import mimetypes
import os
import re
import site
import subprocess
import tempfile
import unicodedata
from contextlib import contextmanager
from pathlib import Path
from typing import Any, Literal, Optional, cast
from collections.abc import Sequence
from dataclasses import dataclass
from typing import Any, Optional, cast
from urllib.parse import unquote
import chardet
import cloudscraper # type: ignore
from bs4 import BeautifulSoup, CData, Comment, NavigableString # type: ignore
from regex import regex # type: ignore
from readabilipy import simple_json_from_html_string # type: ignore
from core.helper import ssrf_proxy
from core.rag.extractor import extract_processor
@@ -23,9 +15,7 @@ from core.rag.extractor.extract_processor import ExtractProcessor
FULL_TEMPLATE = """
TITLE: {title}
AUTHORS: {authors}
PUBLISH DATE: {publish_date}
TOP_IMAGE_URL: {top_image}
AUTHOR: {author}
TEXT:
{text}
@@ -73,8 +63,8 @@ def get_url(url: str, user_agent: Optional[str] = None) -> str:
response = ssrf_proxy.get(url, headers=headers, follow_redirects=True, timeout=(120, 300))
elif response.status_code == 403:
scraper = cloudscraper.create_scraper()
scraper.perform_request = ssrf_proxy.make_request
response = scraper.get(url, headers=headers, follow_redirects=True, timeout=(120, 300))
scraper.perform_request = ssrf_proxy.make_request # type: ignore
response = scraper.get(url, headers=headers, follow_redirects=True, timeout=(120, 300)) # type: ignore
if response.status_code != 200:
return "URL returned status code {}.".format(response.status_code)
@@ -90,273 +80,36 @@ def get_url(url: str, user_agent: Optional[str] = None) -> str:
else:
content = response.text
a = extract_using_readabilipy(content)
article = extract_using_readabilipy(content)
if not a["plain_text"] or not a["plain_text"].strip():
if not article.text:
return ""
res = FULL_TEMPLATE.format(
title=a["title"],
authors=a["byline"],
publish_date=a["date"],
top_image="",
text=a["plain_text"] or "",
title=article.title,
author=article.auther,
text=article.text,
)
return res
def extract_using_readabilipy(html):
with tempfile.NamedTemporaryFile(delete=False, mode="w+") as f_html:
f_html.write(html)
f_html.close()
html_path = f_html.name
# Call Mozilla's Readability.js Readability.parse() function via node, writing output to a temporary file
article_json_path = html_path + ".json"
jsdir = os.path.join(find_module_path("readabilipy"), "javascript")
with chdir(jsdir):
subprocess.check_call(["node", "ExtractArticle.js", "-i", html_path, "-o", article_json_path])
# Read output of call to Readability.parse() from JSON file and return as Python dictionary
input_json = json.loads(Path(article_json_path).read_text(encoding="utf-8"))
# Deleting files after processing
os.unlink(article_json_path)
os.unlink(html_path)
article_json: dict[str, Any] = {
"title": None,
"byline": None,
"date": None,
"content": None,
"plain_content": None,
"plain_text": None,
}
# Populate article fields from readability fields where present
if input_json:
if input_json.get("title"):
article_json["title"] = input_json["title"]
if input_json.get("byline"):
article_json["byline"] = input_json["byline"]
if input_json.get("date"):
article_json["date"] = input_json["date"]
if input_json.get("content"):
article_json["content"] = input_json["content"]
article_json["plain_content"] = plain_content(article_json["content"], False, False)
article_json["plain_text"] = extract_text_blocks_as_plain_text(article_json["plain_content"])
if input_json.get("textContent"):
article_json["plain_text"] = input_json["textContent"]
article_json["plain_text"] = re.sub(r"\n\s*\n", "\n", article_json["plain_text"])
return article_json
@dataclass
class Article:
title: str
auther: str
text: Sequence[dict]
def find_module_path(module_name):
for package_path in site.getsitepackages():
potential_path = os.path.join(package_path, module_name)
if os.path.exists(potential_path):
return potential_path
return None
@contextmanager
def chdir(path):
"""Change directory in context and return to original on exit"""
# From https://stackoverflow.com/a/37996581, couldn't find a built-in
original_path = os.getcwd()
os.chdir(path)
try:
yield
finally:
os.chdir(original_path)
def extract_text_blocks_as_plain_text(paragraph_html):
# Load article as DOM
soup = BeautifulSoup(paragraph_html, "html.parser")
# Select all lists
list_elements = soup.find_all(["ul", "ol"])
# Prefix text in all list items with "* " and make lists paragraphs
for list_element in list_elements:
plain_items = "".join(
list(filter(None, [plain_text_leaf_node(li)["text"] for li in list_element.find_all("li")]))
)
list_element.string = plain_items
list_element.name = "p"
# Select all text blocks
text_blocks = [s.parent for s in soup.find_all(string=True)]
text_blocks = [plain_text_leaf_node(block) for block in text_blocks]
# Drop empty paragraphs
text_blocks = list(filter(lambda p: p["text"] is not None, text_blocks))
return text_blocks
def plain_text_leaf_node(element):
# Extract all text, stripped of any child HTML elements and normalize it
plain_text = normalize_text(element.get_text())
if plain_text != "" and element.name == "li":
plain_text = "* {}, ".format(plain_text)
if plain_text == "":
plain_text = None
if "data-node-index" in element.attrs:
plain = {"node_index": element["data-node-index"], "text": plain_text}
else:
plain = {"text": plain_text}
return plain
def plain_content(readability_content, content_digests, node_indexes):
# Load article as DOM
soup = BeautifulSoup(readability_content, "html.parser")
# Make all elements plain
elements = plain_elements(soup.contents, content_digests, node_indexes)
if node_indexes:
# Add node index attributes to nodes
elements = [add_node_indexes(element) for element in elements]
# Replace article contents with plain elements
soup.contents = elements
return str(soup)
def plain_elements(elements, content_digests, node_indexes):
# Get plain content versions of all elements
elements = [plain_element(element, content_digests, node_indexes) for element in elements]
if content_digests:
# Add content digest attribute to nodes
elements = [add_content_digest(element) for element in elements]
return elements
def plain_element(element, content_digests, node_indexes):
# For lists, we make each item plain text
if is_leaf(element):
# For leaf node elements, extract the text content, discarding any HTML tags
# 1. Get element contents as text
plain_text = element.get_text()
# 2. Normalize the extracted text string to a canonical representation
plain_text = normalize_text(plain_text)
# 3. Update element content to be plain text
element.string = plain_text
elif is_text(element):
if is_non_printing(element):
# The simplified HTML may have come from Readability.js so might
# have non-printing text (e.g. Comment or CData). In this case, we
# keep the structure, but ensure that the string is empty.
element = type(element)("")
else:
plain_text = element.string
plain_text = normalize_text(plain_text)
element = type(element)(plain_text)
else:
# If not a leaf node or leaf type call recursively on child nodes, replacing
element.contents = plain_elements(element.contents, content_digests, node_indexes)
return element
def add_node_indexes(element, node_index="0"):
# Can't add attributes to string types
if is_text(element):
return element
# Add index to current element
element["data-node-index"] = node_index
# Add index to child elements
for local_idx, child in enumerate([c for c in element.contents if not is_text(c)], start=1):
# Can't add attributes to leaf string types
child_index = "{stem}.{local}".format(stem=node_index, local=local_idx)
add_node_indexes(child, node_index=child_index)
return element
def normalize_text(text):
"""Normalize unicode and whitespace."""
# Normalize unicode first to try and standardize whitespace characters as much as possible before normalizing them
text = strip_control_characters(text)
text = normalize_unicode(text)
text = normalize_whitespace(text)
return text
def strip_control_characters(text):
"""Strip out unicode control characters which might break the parsing."""
# Unicode control characters
# [Cc]: Other, Control [includes new lines]
# [Cf]: Other, Format
# [Cn]: Other, Not Assigned
# [Co]: Other, Private Use
# [Cs]: Other, Surrogate
control_chars = {"Cc", "Cf", "Cn", "Co", "Cs"}
retained_chars = ["\t", "\n", "\r", "\f"]
# Remove non-printing control characters
return "".join(
[
"" if (unicodedata.category(char) in control_chars) and (char not in retained_chars) else char
for char in text
]
def extract_using_readabilipy(html: str):
json_article: dict[str, Any] = simple_json_from_html_string(html, use_readability=True)
article = Article(
title=json_article.get("title") or "",
auther=json_article.get("byline") or "",
text=json_article.get("plain_text") or [],
)
def normalize_unicode(text):
"""Normalize unicode such that things that are visually equivalent map to the same unicode string where possible."""
normal_form: Literal["NFC", "NFD", "NFKC", "NFKD"] = "NFKC"
text = unicodedata.normalize(normal_form, text)
return text
def normalize_whitespace(text):
"""Replace runs of whitespace characters with a single space as this is what happens when HTML text is displayed."""
text = regex.sub(r"\s+", " ", text)
# Remove leading and trailing whitespace
text = text.strip()
return text
def is_leaf(element):
return element.name in {"p", "li"}
def is_text(element):
return isinstance(element, NavigableString)
def is_non_printing(element):
return any(isinstance(element, _e) for _e in [Comment, CData])
def add_content_digest(element):
if not is_text(element):
element["data-content-digest"] = content_digest(element)
return element
def content_digest(element):
digest: Any
if is_text(element):
# Hash
trimmed_string = element.string.strip()
if trimmed_string == "":
digest = ""
else:
digest = hashlib.sha256(trimmed_string.encode("utf-8")).hexdigest()
else:
contents = element.contents
num_contents = len(contents)
if num_contents == 0:
# No hash when no child elements exist
digest = ""
elif num_contents == 1:
# If single child, use digest of child
digest = content_digest(contents[0])
else:
# Build content digest from the "non-empty" digests of child nodes
digest = hashlib.sha256()
child_digests = list(filter(lambda x: x != "", [content_digest(content) for content in contents]))
for child in child_digests:
digest.update(child.encode("utf-8"))
digest = digest.hexdigest()
return digest
return article
def get_image_upload_file_ids(content):
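
Illustration (not part of the diff): rough usage of the readabilipy entry point the refactor switches to. `simple_json_from_html_string` is readabilipy's real API; with `use_readability=True` it shells out to Mozilla's Readability.js via Node.js, and `plain_text` comes back as a list of `{"text": ...}` blocks, matching the fields used above.

```python
from readabilipy import simple_json_from_html_string

html = "<html><body><h1>Title</h1><p>Some article text.</p></body></html>"
article = simple_json_from_html_string(html, use_readability=True)

print(article.get("title"))   # "Title"
print(article.get("byline"))  # author, when the page declares one
for block in article.get("plain_text") or []:
    print(block["text"])
```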

View File

@@ -195,7 +195,7 @@ class CodeNode(BaseNode[CodeNodeData]):
if output_config.type == "object":
# check if output is object
if not isinstance(result.get(output_name), dict):
if isinstance(result.get(output_name), type(None)):
if result.get(output_name) is None:
transformed_result[output_name] = None
else:
raise OutputValidationError(
@@ -223,7 +223,7 @@ class CodeNode(BaseNode[CodeNodeData]):
elif output_config.type == "array[number]":
# check if array of number available
if not isinstance(result[output_name], list):
if isinstance(result[output_name], type(None)):
if result[output_name] is None:
transformed_result[output_name] = None
else:
raise OutputValidationError(
@@ -244,7 +244,7 @@ class CodeNode(BaseNode[CodeNodeData]):
elif output_config.type == "array[string]":
# check if array of string available
if not isinstance(result[output_name], list):
if isinstance(result[output_name], type(None)):
if result[output_name] is None:
transformed_result[output_name] = None
else:
raise OutputValidationError(
@@ -265,7 +265,7 @@ class CodeNode(BaseNode[CodeNodeData]):
elif output_config.type == "array[object]":
# check if array of object available
if not isinstance(result[output_name], list):
if isinstance(result[output_name], type(None)):
if result[output_name] is None:
transformed_result[output_name] = None
else:
raise OutputValidationError(
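
Illustration (not part of the diff): the two checks are equivalent for every value; the identity comparison is simply the idiomatic spelling, which is all this hunk changes.

```python
# isinstance(x, type(None)) and (x is None) always agree
for value in (None, 0, "", [], {"k": 1}):
    assert isinstance(value, type(None)) == (value is None)
```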

View File

@@ -968,14 +968,12 @@ def _handle_memory_chat_mode(
*,
memory: TokenBufferMemory | None,
memory_config: MemoryConfig | None,
model_config: ModelConfigWithCredentialsEntity,
model_config: ModelConfigWithCredentialsEntity, # TODO(-LAN-): Needs to remove
) -> Sequence[PromptMessage]:
memory_messages: Sequence[PromptMessage] = []
# Get messages from memory for chat model
if memory and memory_config:
rest_tokens = _calculate_rest_token(prompt_messages=[], model_config=model_config)
memory_messages = memory.get_history_prompt_messages(
max_token_limit=rest_tokens,
message_limit=memory_config.window.size if memory_config.window.enabled else None,
)
return memory_messages

View File

@@ -35,6 +35,7 @@ else
--worker-class ${SERVER_WORKER_CLASS:-gevent} \
--worker-connections ${SERVER_WORKER_CONNECTIONS:-10} \
--timeout ${GUNICORN_TIMEOUT:-200} \
--keep-alive ${GUNICORN_KEEP_ALIVE:-2} \
app:app
fi
fi
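
Illustration (not part of the diff): the same settings expressed as a `gunicorn.conf.py` (gunicorn config files are Python), using the defaults from the entrypoint above:

```python
# gunicorn.conf.py equivalent of the CLI flags in the entrypoint script
worker_class = "gevent"
worker_connections = 10
timeout = 200
keepalive = 2  # seconds to hold a Keep-Alive connection open for the next request
```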

api/poetry.lock (generated, 6289 lines)

File diff suppressed because it is too large.

View File

@@ -21,7 +21,7 @@ azure-ai-inference = "~1.0.0b8"
azure-ai-ml = "~1.20.0"
azure-identity = "1.16.1"
beautifulsoup4 = "4.12.2"
boto3 = "1.36.12"
boto3 = "~1.35.0"
bs4 = "~0.0.1"
cachetools = "~5.3.0"
celery = "~5.4.0"
@@ -48,7 +48,7 @@ google-generativeai = "0.8.1"
googleapis-common-protos = "1.63.0"
gunicorn = "~23.0.0"
httpx = { version = "~0.27.0", extras = ["socks"] }
huggingface-hub = "~0.16.4"
huggingface-hub = "~0.31.0"
jieba = "0.42.1"
langfuse = "~2.51.3"
langsmith = "~0.1.77"
@@ -78,18 +78,18 @@ pyyaml = "~6.0.1"
readabilipy = "0.2.0"
redis = { version = "~5.0.3", extras = ["hiredis"] }
replicate = "~0.22.0"
resend = "~0.7.0"
resend = "~2.9.0"
sagemaker = "~2.231.0"
scikit-learn = "~1.5.1"
sentry-sdk = { version = "~1.44.1", extras = ["flask"] }
sqlalchemy = "~2.0.29"
starlette = "0.41.0"
tencentcloud-sdk-python-hunyuan = "~3.0.1294"
tiktoken = "~0.8.0"
tiktoken = "^0.9.0"
tokenizers = "~0.15.0"
transformers = "~4.35.0"
transformers = "~4.39.0"
unstructured = { version = "~0.16.1", extras = ["docx", "epub", "md", "msg", "ppt", "pptx"] }
validators = "0.21.0"
validators = "0.22.0"
volcengine-python-sdk = {extras = ["ark"], version = "~1.0.98"}
websocket-client = "~1.7.0"
xinference-client = "0.15.2"
@@ -112,7 +112,7 @@ safetensors = "~0.4.3"
# [ Tools ] dependency group
############################################################
[tool.poetry.group.tools.dependencies]
arxiv = "2.1.0"
arxiv = "2.2.0"
cloudscraper = "1.2.71"
duckduckgo-search = "~6.3.0"
jsonpath-ng = "1.6.1"
@@ -166,7 +166,7 @@ tcvectordb = "1.3.2"
tidb-vector = "0.0.9"
upstash-vector = "0.6.0"
volcengine-compat = "~1.0.156"
weaviate-client = "~3.21.0"
weaviate-client = "~3.26.0"
############################################################
# [ Dev ] dependency group

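Editor's note: the tiktoken bump also switches constraint styles, from tilde to caret. For 0.x packages Poetry's caret pins the left-most non-zero digit, so "^0.9.0" resolves to >=0.9.0,<0.10.0, much as "~0.8.0" resolved to >=0.8.0,<0.9.0. A small sketch of those ranges using the packaging library (an illustration only; Poetry's own solver is what actually enforces them):

from packaging.specifiers import SpecifierSet

tilde_0_8 = SpecifierSet(">=0.8.0,<0.9.0")    # poetry "~0.8.0"
caret_0_9 = SpecifierSet(">=0.9.0,<0.10.0")   # poetry "^0.9.0" (0.x caret pins the minor)

print("0.9.0" in tilde_0_8)   # False -- the old constraint blocked 0.9.x
print("0.9.0" in caret_0_9)   # True
print("0.10.0" in caret_0_9)  # False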
View File

@@ -406,10 +406,8 @@ class AccountService:
raise PasswordResetRateLimitExceededError()
code = "".join([str(random.randint(0, 9)) for _ in range(6)])
token = TokenManager.generate_token(
account=account, email=email, token_type="reset_password", additional_data={"code": code}
)
code, token = cls.generate_reset_password_token(account_email, account)
send_reset_password_mail_task.delay(
language=language,
to=account_email,
@@ -418,6 +416,22 @@ class AccountService:
cls.reset_password_rate_limiter.increment_rate_limit(account_email)
return token
@classmethod
def generate_reset_password_token(
cls,
email: str,
account: Optional[Account] = None,
code: Optional[str] = None,
additional_data: dict[str, Any] = {},
):
if not code:
code = "".join([str(random.randint(0, 9)) for _ in range(6)])
additional_data["code"] = code
token = TokenManager.generate_token(
account=account, email=email, token_type="reset_password", additional_data=additional_data
)
return code, token
@classmethod
def revoke_reset_password_token(cls, token: str):
TokenManager.revoke_token(token, "reset_password")
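Editor's note: the refactor extracts code-plus-token generation into a reusable classmethod. One caveat worth flagging: additional_data: dict[str, Any] = {} is a mutable default, which Python evaluates once and shares across calls. A standalone sketch of the same helper with a None default; here secrets.token_urlsafe is a stand-in for TokenManager.generate_token, which this diff does not show:

import random
import secrets
from typing import Any, Optional


def generate_reset_password_token(
    email: str,
    code: Optional[str] = None,
    additional_data: Optional[dict[str, Any]] = None,
) -> tuple[str, str]:
    # Copy into a fresh dict so no state leaks between calls.
    data = dict(additional_data or {})
    if not code:
        code = "".join(str(random.randint(0, 9)) for _ in range(6))
    data["code"] = code
    # `data` is what the real token backend would persist alongside the token.
    token = secrets.token_urlsafe(32)  # stand-in for TokenManager.generate_token(...)
    return code, token


code, token = generate_reset_password_token("user@example.com")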

View File

@@ -55,13 +55,19 @@ def _check_version_compatibility(imported_version: str) -> ImportStatus:
except version.InvalidVersion:
return ImportStatus.FAILED
# Compare major version and minor version
if current_ver.major != imported_ver.major or current_ver.minor != imported_ver.minor:
# If imported version is newer than current, always return PENDING
if imported_ver > current_ver:
return ImportStatus.PENDING
if current_ver.micro != imported_ver.micro:
# If imported version is older than current's major, return PENDING
if imported_ver.major < current_ver.major:
return ImportStatus.PENDING
# If imported version is older than current's minor, return COMPLETED_WITH_WARNINGS
if imported_ver.minor < current_ver.minor:
return ImportStatus.COMPLETED_WITH_WARNINGS
# If imported version equals or is older than current's micro, return COMPLETED
return ImportStatus.COMPLETED
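Editor's note: the rewrite replaces the old symmetric rule (any major/minor mismatch is PENDING, a micro mismatch warns) with an asymmetric one: anything newer than the current version, or a whole major version older, needs explicit confirmation; an older minor only warns; an equal or older micro imports cleanly. A self-contained sketch of that decision table (the ImportStatus values are illustrative):

from enum import Enum

from packaging import version


class ImportStatus(Enum):
    COMPLETED = "completed"
    COMPLETED_WITH_WARNINGS = "completed-with-warnings"
    PENDING = "pending"
    FAILED = "failed"


def check_version_compatibility(imported: str, current: str) -> ImportStatus:
    try:
        imported_ver = version.parse(imported)
        current_ver = version.parse(current)
    except version.InvalidVersion:
        return ImportStatus.FAILED
    if imported_ver > current_ver:
        return ImportStatus.PENDING  # newer DSL than this deployment understands
    if imported_ver.major < current_ver.major:
        return ImportStatus.PENDING  # a whole major behind: confirm explicitly
    if imported_ver.minor < current_ver.minor:
        return ImportStatus.COMPLETED_WITH_WARNINGS
    return ImportStatus.COMPLETED


print(check_version_compatibility("0.14.0", "0.15.3").value)  # completed-with-warnings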

View File

@@ -142,6 +142,9 @@ CELERY_WORKER_CLASS=
# it is recommended to set it to 360 to support a longer sse connection time.
GUNICORN_TIMEOUT=360
# The number of seconds to wait for requests on a Keep-Alive connection, default to 2
GUNICORN_KEEP_ALIVE=2
# The number of Celery workers. The default is 1, and can be set as needed.
CELERY_WORKER_AMOUNT=
@@ -932,3 +935,6 @@ MAX_SUBMIT_COUNT=100
# The maximum number of top-k value for RAG.
TOP_K_MAX_VALUE=10
# Prevent Clickjacking
ALLOW_EMBED=false

View File

@@ -1,8 +1,8 @@
x-shared-env: &shared-api-worker-env
x-shared-env: &shared-api-worker-env
services:
# API service
api:
image: langgenius/dify-api:0.15.3
image: langgenius/dify-api:0.15.8
restart: always
environment:
# Use the shared environment variables.
@@ -25,7 +25,7 @@ services:
# worker service
# The Celery worker for processing the queue.
worker:
image: langgenius/dify-api:0.15.3
image: langgenius/dify-api:0.15.8
restart: always
environment:
# Use the shared environment variables.
@@ -47,7 +47,7 @@ services:
# Frontend web application.
web:
image: langgenius/dify-web:0.15.3
image: langgenius/dify-web:0.15.8
restart: always
environment:
CONSOLE_API_URL: ${CONSOLE_API_URL:-}
@@ -56,6 +56,7 @@ services:
NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0}
TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000}
CSP_WHITELIST: ${CSP_WHITELIST:-}
ALLOW_EMBED: ${ALLOW_EMBED:-false}
TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-}
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: ${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH:-}
@@ -98,7 +99,7 @@ services:
# The DifySandbox
sandbox:
image: langgenius/dify-sandbox:0.2.10
image: langgenius/dify-sandbox:0.2.11
restart: always
environment:
# The DifySandbox configurations

View File

@@ -43,7 +43,7 @@ services:
# The DifySandbox
sandbox:
image: langgenius/dify-sandbox:0.2.10
image: langgenius/dify-sandbox:0.2.11
restart: always
environment:
# The DifySandbox configurations

View File

@@ -37,6 +37,7 @@ x-shared-env: &shared-api-worker-env
SERVER_WORKER_CONNECTIONS: ${SERVER_WORKER_CONNECTIONS:-10}
CELERY_WORKER_CLASS: ${CELERY_WORKER_CLASS:-}
GUNICORN_TIMEOUT: ${GUNICORN_TIMEOUT:-360}
GUNICORN_KEEP_ALIVE: ${GUNICORN_KEEP_ALIVE:-2}
CELERY_WORKER_AMOUNT: ${CELERY_WORKER_AMOUNT:-}
CELERY_AUTO_SCALE: ${CELERY_AUTO_SCALE:-false}
CELERY_MAX_WORKERS: ${CELERY_MAX_WORKERS:-}
@@ -389,11 +390,12 @@ x-shared-env: &shared-api-worker-env
CREATE_TIDB_SERVICE_JOB_ENABLED: ${CREATE_TIDB_SERVICE_JOB_ENABLED:-false}
MAX_SUBMIT_COUNT: ${MAX_SUBMIT_COUNT:-100}
TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-10}
ALLOW_EMBED: ${ALLOW_EMBED:-false}
services:
# API service
api:
image: langgenius/dify-api:0.15.3
image: langgenius/dify-api:0.15.8
restart: always
environment:
# Use the shared environment variables.
@@ -416,7 +418,7 @@ services:
# worker service
# The Celery worker for processing the queue.
worker:
image: langgenius/dify-api:0.15.3
image: langgenius/dify-api:0.15.8
restart: always
environment:
# Use the shared environment variables.
@@ -438,7 +440,7 @@ services:
# Frontend web application.
web:
image: langgenius/dify-web:0.15.3
image: langgenius/dify-web:0.15.8
restart: always
environment:
CONSOLE_API_URL: ${CONSOLE_API_URL:-}
@@ -447,6 +449,7 @@ services:
NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0}
TEXT_GENERATION_TIMEOUT_MS: ${TEXT_GENERATION_TIMEOUT_MS:-60000}
CSP_WHITELIST: ${CSP_WHITELIST:-}
ALLOW_EMBED: ${ALLOW_EMBED:-false}
TOP_K_MAX_VALUE: ${TOP_K_MAX_VALUE:-}
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: ${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH:-}
@@ -489,7 +492,7 @@ services:
# The DifySandbox
sandbox:
image: langgenius/dify-sandbox:0.2.10
image: langgenius/dify-sandbox:0.2.11
restart: always
environment:
# The DifySandbox configurations

View File

@@ -31,3 +31,6 @@ NEXT_PUBLIC_TOP_K_MAX_VALUE=10
# The maximum number of tokens for segmentation
NEXT_PUBLIC_INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH=4000
# Default is not allow to embed into iframe to prevent Clickjacking: https://owasp.org/www-community/attacks/Clickjacking
NEXT_PUBLIC_ALLOW_EMBED=

View File

@@ -186,15 +186,17 @@ const Apps = ({
<div className='w-[180px] h-8'></div>
</div>
<div className='relative flex flex-1 overflow-y-auto'>
{!searchKeywords && <div className='w-[200px] h-full p-4'>
<Sidebar current={currCategory as AppCategories} onClick={(category) => { setCurrCategory(category) }} onCreateFromBlank={onCreateFromBlank} />
{!searchKeywords && <div className='h-full w-[200px] p-4'>
<Sidebar current={currCategory as AppCategories} categories={categories} onClick={(category) => { setCurrCategory(category) }} onCreateFromBlank={onCreateFromBlank} />
</div>}
<div className='flex-1 h-full overflow-auto shrink-0 grow p-6 pt-2 border-l border-divider-burn'>
{searchFilteredList && searchFilteredList.length > 0 && <>
<div className='pt-4 pb-1'>
{searchKeywords
? <p className='title-md-semi-bold text-text-tertiary'>{searchFilteredList.length > 1 ? t('app.newApp.foundResults', { count: searchFilteredList.length }) : t('app.newApp.foundResult', { count: searchFilteredList.length })}</p>
: <AppCategoryLabel category={currCategory as AppCategories} className='title-md-semi-bold text-text-primary' />}
: <div className='flex h-[22px] items-center'>
<AppCategoryLabel category={currCategory as AppCategories} className='title-md-semi-bold text-text-primary' />
</div>}
</div>
<div
className={cn(

View File

@@ -1,39 +1,29 @@
'use client'
import { RiAppsFill, RiChatSmileAiFill, RiExchange2Fill, RiPassPendingFill, RiQuillPenAiFill, RiSpeakAiFill, RiStickyNoteAddLine, RiTerminalBoxFill, RiThumbUpFill } from '@remixicon/react'
import { RiStickyNoteAddLine, RiThumbUpLine } from '@remixicon/react'
import { useTranslation } from 'react-i18next'
import classNames from '@/utils/classnames'
import Divider from '@/app/components/base/divider'
export enum AppCategories {
RECOMMENDED = 'Recommended',
ASSISTANT = 'Assistant',
AGENT = 'Agent',
HR = 'HR',
PROGRAMMING = 'Programming',
WORKFLOW = 'Workflow',
WRITING = 'Writing',
}
type SidebarProps = {
current: AppCategories
onClick?: (category: AppCategories) => void
current: AppCategories | string
categories: string[]
onClick?: (category: AppCategories | string) => void
onCreateFromBlank?: () => void
}
export default function Sidebar({ current, onClick, onCreateFromBlank }: SidebarProps) {
export default function Sidebar({ current, categories, onClick, onCreateFromBlank }: SidebarProps) {
const { t } = useTranslation()
return <div className="w-full h-full flex flex-col">
<ul>
return <div className="flex h-full w-full flex-col">
<ul className='pt-0.5'>
<CategoryItem category={AppCategories.RECOMMENDED} active={current === AppCategories.RECOMMENDED} onClick={onClick} />
</ul>
<div className='px-3 pt-2 pb-1 system-xs-medium-uppercase text-text-tertiary'>{t('app.newAppFromTemplate.byCategories')}</div>
<ul className='flex-grow flex flex-col gap-0.5'>
<CategoryItem category={AppCategories.ASSISTANT} active={current === AppCategories.ASSISTANT} onClick={onClick} />
<CategoryItem category={AppCategories.AGENT} active={current === AppCategories.AGENT} onClick={onClick} />
<CategoryItem category={AppCategories.HR} active={current === AppCategories.HR} onClick={onClick} />
<CategoryItem category={AppCategories.PROGRAMMING} active={current === AppCategories.PROGRAMMING} onClick={onClick} />
<CategoryItem category={AppCategories.WORKFLOW} active={current === AppCategories.WORKFLOW} onClick={onClick} />
<CategoryItem category={AppCategories.WRITING} active={current === AppCategories.WRITING} onClick={onClick} />
<div className='system-xs-medium-uppercase mb-0.5 mt-3 px-3 pb-1 pt-2 text-text-tertiary'>{t('app.newAppFromTemplate.byCategories')}</div>
<ul className='flex grow flex-col gap-0.5'>
{categories.map(category => (<CategoryItem key={category} category={category} active={current === category} onClick={onClick} />))}
</ul>
<Divider bgStyle='gradient' />
<div className='px-3 py-1 flex items-center gap-1 text-text-tertiary cursor-pointer' onClick={onCreateFromBlank}>
@@ -45,47 +35,26 @@ export default function Sidebar({ current, onClick, onCreateFromBlank }: Sidebar
type CategoryItemProps = {
active: boolean
category: AppCategories
onClick?: (category: AppCategories) => void
category: AppCategories | string
onClick?: (category: AppCategories | string) => void
}
function CategoryItem({ category, active, onClick }: CategoryItemProps) {
return <li
className={classNames('p-1 pl-3 rounded-lg flex items-center gap-2 group cursor-pointer hover:bg-state-base-hover [&.active]:bg-state-base-active', active && 'active')}
className={classNames('p-1 pl-3 h-8 rounded-lg flex items-center gap-2 group cursor-pointer hover:bg-state-base-hover [&.active]:bg-state-base-active', active && 'active')}
onClick={() => { onClick?.(category) }}>
<div className='w-5 h-5 inline-flex items-center justify-center rounded-md border border-divider-regular bg-components-icon-bg-midnight-solid group-[.active]:bg-components-icon-bg-blue-solid'>
<AppCategoryIcon category={category} />
</div>
{category === AppCategories.RECOMMENDED && <div className='inline-flex h-5 w-5 items-center justify-center rounded-md'>
<RiThumbUpLine className='h-4 w-4 text-components-menu-item-text group-[.active]:text-components-menu-item-text-active' />
</div>}
<AppCategoryLabel category={category}
className={classNames('system-sm-medium text-components-menu-item-text group-[.active]:text-components-menu-item-text-active group-hover:text-components-menu-item-text-hover', active && 'system-sm-semibold')} />
</li >
}
type AppCategoryLabelProps = {
category: AppCategories
category: AppCategories | string
className?: string
}
export function AppCategoryLabel({ category, className }: AppCategoryLabelProps) {
const { t } = useTranslation()
return <span className={className}>{t(`app.newAppFromTemplate.sidebar.${category}`)}</span>
}
type AppCategoryIconProps = {
category: AppCategories
}
function AppCategoryIcon({ category }: AppCategoryIconProps) {
if (category === AppCategories.AGENT)
return <RiSpeakAiFill className='w-3.5 h-3.5 text-components-avatar-shape-fill-stop-100' />
if (category === AppCategories.ASSISTANT)
return <RiChatSmileAiFill className='w-3.5 h-3.5 text-components-avatar-shape-fill-stop-100' />
if (category === AppCategories.HR)
return <RiPassPendingFill className='w-3.5 h-3.5 text-components-avatar-shape-fill-stop-100' />
if (category === AppCategories.PROGRAMMING)
return <RiTerminalBoxFill className='w-3.5 h-3.5 text-components-avatar-shape-fill-stop-100' />
if (category === AppCategories.RECOMMENDED)
return <RiThumbUpFill className='w-3.5 h-3.5 text-components-avatar-shape-fill-stop-100' />
if (category === AppCategories.WRITING)
return <RiQuillPenAiFill className='w-3.5 h-3.5 text-components-avatar-shape-fill-stop-100' />
if (category === AppCategories.WORKFLOW)
return <RiExchange2Fill className='w-3.5 h-3.5 text-components-avatar-shape-fill-stop-100' />
return <RiAppsFill className='w-3.5 h-3.5 text-components-avatar-shape-fill-stop-100' />
return <span className={className}>{category === AppCategories.RECOMMENDED ? t('app.newAppFromTemplate.sidebar.Recommended') : category}</span>
}

View File

@@ -312,7 +312,7 @@ function AppPreview({ mode }: { mode: AppMode }) {
'chat': {
title: t('app.types.chatbot'),
description: t('app.newApp.chatbotUserDescription'),
link: 'https://docs.dify.ai/guides/application-orchestrate/conversation-application?fallback=true',
link: 'https://docs.dify.ai/guides/application-orchestrate#application_type',
},
'advanced-chat': {
title: t('app.types.advanced'),

View File

@@ -0,0 +1,46 @@
import { useTranslation } from 'react-i18next'
import Modal from '@/app/components/base/modal'
import Button from '@/app/components/base/button'
type DSLConfirmModalProps = {
versions?: {
importedVersion: string
systemVersion: string
}
onCancel: () => void
onConfirm: () => void
confirmDisabled?: boolean
}
const DSLConfirmModal = ({
versions = { importedVersion: '', systemVersion: '' },
onCancel,
onConfirm,
confirmDisabled = false,
}: DSLConfirmModalProps) => {
const { t } = useTranslation()
return (
<Modal
isShow
onClose={() => onCancel()}
className='w-[480px]'
>
<div className='flex flex-col items-start gap-2 self-stretch pb-4'>
<div className='title-2xl-semi-bold text-text-primary'>{t('app.newApp.appCreateDSLErrorTitle')}</div>
<div className='system-md-regular flex grow flex-col text-text-secondary'>
<div>{t('app.newApp.appCreateDSLErrorPart1')}</div>
<div>{t('app.newApp.appCreateDSLErrorPart2')}</div>
<br />
<div>{t('app.newApp.appCreateDSLErrorPart3')}<span className='system-md-medium'>{versions.importedVersion}</span></div>
<div>{t('app.newApp.appCreateDSLErrorPart4')}<span className='system-md-medium'>{versions.systemVersion}</span></div>
</div>
</div>
<div className='flex items-start justify-end gap-2 self-stretch pt-6'>
<Button variant='secondary' onClick={() => onCancel()}>{t('app.newApp.Cancel')}</Button>
<Button variant='primary' destructive onClick={onConfirm} disabled={confirmDisabled}>{t('app.newApp.Confirm')}</Button>
</div>
</Modal>
)
}
export default DSLConfirmModal

View File

@@ -24,7 +24,7 @@ const OPTION_MAP = {
iframe: {
getContent: (url: string, token: string) =>
`<iframe
src="${url}/chatbot/${token}"
src="${url}/chat/${token}"
style="width: 100%; height: 100%; min-height: 700px"
frameborder="0"
allow="microphone">
@@ -35,12 +35,12 @@ const OPTION_MAP = {
`<script>
window.difyChatbotConfig = {
token: '${token}'${isTestEnv
? `,
? `,
isDev: true`
: ''}${IS_CE_EDITION
? `,
: ''}${IS_CE_EDITION
? `,
baseUrl: '${url}'`
: ''}
: ''}
}
</script>
<script

View File

@@ -11,10 +11,12 @@ import { useLocalStorageState } from 'ahooks'
import produce from 'immer'
import type {
ChatConfig,
ChatItem,
Feedback,
} from '../types'
import { CONVERSATION_ID_INFO } from '../constants'
import { getPrevChatList, getProcessedInputsFromUrlParams } from '../utils'
import { buildChatItemTree, getProcessedInputsFromUrlParams } from '../utils'
import { getProcessedFilesFromResponse } from '../../file-uploader/utils'
import {
fetchAppInfo,
fetchAppMeta,
@@ -32,6 +34,33 @@ import { useToastContext } from '@/app/components/base/toast'
import { changeLanguage } from '@/i18n/i18next-config'
import { InputVarType } from '@/app/components/workflow/types'
import { TransferMethod } from '@/types/app'
import { addFileInfos, sortAgentSorts } from '@/app/components/tools/utils'
function getFormattedChatList(messages: any[]) {
const newChatList: ChatItem[] = []
messages.forEach((item) => {
const questionFiles = item.message_files?.filter((file: any) => file.belongs_to === 'user') || []
newChatList.push({
id: `question-${item.id}`,
content: item.query,
isAnswer: false,
message_files: getProcessedFilesFromResponse(questionFiles.map((item: any) => ({ ...item, related_id: item.id }))),
parentMessageId: item.parent_message_id || undefined,
})
const answerFiles = item.message_files?.filter((file: any) => file.belongs_to === 'assistant') || []
newChatList.push({
id: item.id,
content: item.answer,
agent_thoughts: addFileInfos(item.agent_thoughts ? sortAgentSorts(item.agent_thoughts) : item.agent_thoughts, item.message_files),
feedback: item.feedback,
isAnswer: true,
citation: item.retriever_resources,
message_files: getProcessedFilesFromResponse(answerFiles.map((item: any) => ({ ...item, related_id: item.id }))),
parentMessageId: `question-${item.id}`,
})
})
return newChatList
}
export const useEmbeddedChatbot = () => {
const isInstalledApp = false
@@ -77,7 +106,7 @@ export const useEmbeddedChatbot = () => {
const appPrevChatList = useMemo(
() => (currentConversationId && appChatListData?.data.length)
? getPrevChatList(appChatListData.data)
? buildChatItemTree(getFormattedChatList(appChatListData.data))
: [],
[appChatListData, currentConversationId],
)
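Editor's note: getFormattedChatList turns each backend message into a question/answer pair whose parentMessageId links let buildChatItemTree reconstruct branching regenerations: the answer points at its synthetic question-<id> node, and the question points at the previous answer. A stripped-down sketch of that pairing (files, feedback, agent thoughts and citations omitted):

def format_chat_list(messages: list[dict]) -> list[dict]:
    chat_list: list[dict] = []
    for item in messages:
        chat_list.append({
            "id": f"question-{item['id']}",
            "content": item["query"],
            "is_answer": False,
            # Link back to the previous answer, if any.
            "parent_message_id": item.get("parent_message_id"),
        })
        chat_list.append({
            "id": item["id"],
            "content": item["answer"],
            "is_answer": True,
            # Every answer hangs off its own question node.
            "parent_message_id": f"question-{item['id']}",
        })
    return chat_list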

View File

@@ -1,6 +1,8 @@
import { UUID_NIL } from './constants'
import type { IChatItem } from './chat/type'
import type { ChatItem, ChatItemInTree } from './types'
import { addFileInfos, sortAgentSorts } from '../../tools/utils'
import { getProcessedFilesFromResponse } from '../file-uploader/utils'
async function decodeBase64AndDecompress(base64String: string) {
const binaryString = atob(base64String)
@@ -19,6 +21,60 @@ function getProcessedInputsFromUrlParams(): Record<string, any> {
return inputs
}
function appendQAToChatList(chatList: ChatItem[], item: any) {
// we append answer first and then question since will reverse the whole chatList later
const answerFiles = item.message_files?.filter((file: any) => file.belongs_to === 'assistant') || []
chatList.push({
id: item.id,
content: item.answer,
agent_thoughts: addFileInfos(item.agent_thoughts ? sortAgentSorts(item.agent_thoughts) : item.agent_thoughts, item.message_files),
feedback: item.feedback,
isAnswer: true,
citation: item.retriever_resources,
message_files: getProcessedFilesFromResponse(answerFiles.map((item: any) => ({ ...item, related_id: item.id }))),
})
const questionFiles = item.message_files?.filter((file: any) => file.belongs_to === 'user') || []
chatList.push({
id: `question-${item.id}`,
content: item.query,
isAnswer: false,
message_files: getProcessedFilesFromResponse(questionFiles.map((item: any) => ({ ...item, related_id: item.id }))),
})
}
/**
* Computes the latest thread messages from all messages of the conversation.
* Same logic as backend codebase `api/core/prompt/utils/extract_thread_messages.py`
*
* @param fetchedMessages - The history chat list data from the backend, sorted by created_at in descending order. This includes all flattened history messages of the conversation.
* @returns An array of ChatItems representing the latest thread.
*/
function getPrevChatList(fetchedMessages: any[]) {
const ret: ChatItem[] = []
let nextMessageId = null
for (const item of fetchedMessages) {
if (!item.parent_message_id) {
appendQAToChatList(ret, item)
break
}
if (!nextMessageId) {
appendQAToChatList(ret, item)
nextMessageId = item.parent_message_id
}
else {
if (item.id === nextMessageId || nextMessageId === UUID_NIL) {
appendQAToChatList(ret, item)
nextMessageId = item.parent_message_id
}
}
}
return ret.reverse()
}
function isValidGeneratedAnswer(item?: ChatItem | ChatItemInTree): boolean {
return !!item && item.isAnswer && !item.id.startsWith('answer-placeholder-') && !item.isOpeningStatement
}
@@ -164,6 +220,7 @@ function getThreadMessages(tree: ChatItemInTree[], targetMessageId?: string): Ch
export {
getProcessedInputsFromUrlParams,
isValidGeneratedAnswer,
getPrevChatList,
getLastAnswer,
buildChatItemTree,
getThreadMessages,

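Editor's note: getPrevChatList is kept (and re-exported) for the thread-extraction logic documented in its JSDoc, which, per the comment, mirrors api/core/prompt/utils/extract_thread_messages.py. A Python rendering of the same walk, with dicts standing in for message objects and field names taken from the diff above:

UUID_NIL = "00000000-0000-0000-0000-000000000000"


def extract_thread_messages(fetched_messages: list[dict]) -> list[dict]:
    """fetched_messages is sorted by created_at descending; follow
    parent_message_id links to keep only the latest thread."""
    thread: list[dict] = []
    next_message_id = None
    for message in fetched_messages:
        if not message.get("parent_message_id"):
            # Legacy root message: include it and stop walking.
            thread.append(message)
            break
        if next_message_id is None:
            # The newest message starts the thread.
            thread.append(message)
            next_message_id = message["parent_message_id"]
        elif message["id"] == next_message_id or next_message_id == UUID_NIL:
            thread.append(message)
            next_message_id = message["parent_message_id"]
    return list(reversed(thread))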
View File

@@ -10,6 +10,7 @@ import SyntaxHighlighter from 'react-syntax-highlighter'
import { atelierHeathLight } from 'react-syntax-highlighter/dist/esm/styles/hljs'
import { Component, memo, useMemo, useRef, useState } from 'react'
import type { CodeComponent } from 'react-markdown/lib/ast-to-react'
import SVGRenderer from './svg-gallery'
import cn from '@/utils/classnames'
import CopyBtn from '@/app/components/base/copy-btn'
import SVGBtn from '@/app/components/base/svg'
@@ -18,7 +19,7 @@ import ImageGallery from '@/app/components/base/image-gallery'
import { useChatContext } from '@/app/components/base/chat/chat/context'
import VideoGallery from '@/app/components/base/video-gallery'
import AudioGallery from '@/app/components/base/audio-gallery'
import SVGRenderer from '@/app/components/base/svg-gallery'
// import SVGRenderer from '@/app/components/base/svg-gallery'
import MarkdownButton from '@/app/components/base/markdown-blocks/button'
import MarkdownForm from '@/app/components/base/markdown-blocks/form'

View File

@@ -1,5 +1,6 @@
import { useEffect, useRef, useState } from 'react'
import { SVG } from '@svgdotjs/svg.js'
import DOMPurify from 'dompurify'
import ImagePreview from '@/app/components/base/image-uploader/image-preview'
export const SVGRenderer = ({ content }: { content: string }) => {
@@ -44,7 +45,7 @@ export const SVGRenderer = ({ content }: { content: string }) => {
svgRef.current.style.width = `${Math.min(originalWidth, 298)}px`
const rootElement = draw.svg(content)
const rootElement = draw.svg(DOMPurify.sanitize(content))
rootElement.click(() => {
setImagePreview(svgToDataURL(svgElement as Element))

View File

@@ -1,12 +1,10 @@
'use client'
import React, { useMemo, useState } from 'react'
import { useRouter } from 'next/navigation'
import React, { useCallback, useMemo, useState } from 'react'
import { useTranslation } from 'react-i18next'
import { useContext } from 'use-context-selector'
import useSWR from 'swr'
import { useDebounceFn } from 'ahooks'
import Toast from '../../base/toast'
import s from './style.module.css'
import cn from '@/utils/classnames'
import ExploreContext from '@/context/explore-context'
@@ -14,17 +12,17 @@ import type { App } from '@/models/explore'
import Category from '@/app/components/explore/category'
import AppCard from '@/app/components/explore/app-card'
import { fetchAppDetail, fetchAppList } from '@/service/explore'
import { importDSL } from '@/service/apps'
import { useTabSearchParams } from '@/hooks/use-tab-searchparams'
import CreateAppModal from '@/app/components/explore/create-app-modal'
import AppTypeSelector from '@/app/components/app/type-selector'
import type { CreateAppModalProps } from '@/app/components/explore/create-app-modal'
import Loading from '@/app/components/base/loading'
import { NEED_REFRESH_APP_LIST_KEY } from '@/config'
import { useAppContext } from '@/context/app-context'
import { getRedirection } from '@/utils/app-redirection'
import Input from '@/app/components/base/input'
import { DSLImportMode } from '@/models/app'
import {
DSLImportMode,
} from '@/models/app'
import { useImportDSL } from '@/hooks/use-import-dsl'
import DSLConfirmModal from '@/app/components/app/create-from-dsl-modal/dsl-confirm-modal'
type AppsProps = {
pageType?: PageType
@@ -41,8 +39,6 @@ const Apps = ({
onSuccess,
}: AppsProps) => {
const { t } = useTranslation()
const { isCurrentWorkspaceEditor } = useAppContext()
const { push } = useRouter()
const { hasEditPermission } = useContext(ExploreContext)
const allCategoriesEn = t('explore.apps.allCategories', { lng: 'en' })
@@ -117,6 +113,14 @@ const Apps = ({
const [currApp, setCurrApp] = React.useState<App | null>(null)
const [isShowCreateModal, setIsShowCreateModal] = React.useState(false)
const {
handleImportDSL,
handleImportDSLConfirm,
versions,
isFetching,
} = useImportDSL()
const [showDSLConfirmModal, setShowDSLConfirmModal] = useState(false)
const onCreate: CreateAppModalProps['onConfirm'] = async ({
name,
icon_type,
@@ -127,31 +131,31 @@ const Apps = ({
const { export_data } = await fetchAppDetail(
currApp?.app.id as string,
)
try {
const app = await importDSL({
mode: DSLImportMode.YAML_CONTENT,
yaml_content: export_data,
name,
icon_type,
icon,
icon_background,
description,
})
setIsShowCreateModal(false)
Toast.notify({
type: 'success',
message: t('app.newApp.appCreated'),
})
if (onSuccess)
onSuccess()
localStorage.setItem(NEED_REFRESH_APP_LIST_KEY, '1')
getRedirection(isCurrentWorkspaceEditor, { id: app.app_id }, push)
}
catch (e) {
Toast.notify({ type: 'error', message: t('app.newApp.appCreateFailed') })
const payload = {
mode: DSLImportMode.YAML_CONTENT,
yaml_content: export_data,
name,
icon_type,
icon,
icon_background,
description,
}
await handleImportDSL(payload, {
onSuccess: () => {
setIsShowCreateModal(false)
},
onPending: () => {
setShowDSLConfirmModal(true)
},
})
}
const onConfirmDSL = useCallback(async () => {
await handleImportDSLConfirm({
onSuccess,
})
}, [handleImportDSLConfirm, onSuccess])
if (!categories || categories.length === 0) {
return (
<div className="flex h-full items-center">
@@ -234,9 +238,20 @@ const Apps = ({
appDescription={currApp?.app.description || ''}
show={isShowCreateModal}
onConfirm={onCreate}
confirmDisabled={isFetching}
onHide={() => setIsShowCreateModal(false)}
/>
)}
{
showDSLConfirmModal && (
<DSLConfirmModal
versions={versions}
onCancel={() => setShowDSLConfirmModal(false)}
onConfirm={onConfirmDSL}
confirmDisabled={isFetching}
/>
)
}
</div>
)
}

View File

@@ -33,6 +33,7 @@ export type CreateAppModalProps = {
description: string
use_icon_as_answer_icon?: boolean
}) => Promise<void>
confirmDisabled?: boolean
onHide: () => void
}
@@ -48,6 +49,7 @@ const CreateAppModal = ({
appMode,
appUseIconAsAnswerIcon,
onConfirm,
confirmDisabled,
onHide,
}: CreateAppModalProps) => {
const { t } = useTranslation()
@@ -145,7 +147,7 @@ const CreateAppModal = ({
{!isEditModal && isAppsFull && <AppsFull loc='app-explore-create' />}
</div>
<div className='flex flex-row-reverse'>
<Button disabled={!isEditModal && isAppsFull} className='w-24 ml-2' variant='primary' onClick={submit}>{!isEditModal ? t('common.operation.create') : t('common.operation.save')}</Button>
<Button disabled={(!isEditModal && isAppsFull) || !name.trim() || confirmDisabled} className='w-24 ml-2' variant='primary' onClick={submit}>{!isEditModal ? t('common.operation.create') : t('common.operation.save')}</Button>
<Button className='w-24' onClick={onHide}>{t('common.operation.cancel')}</Button>
</div>
</Modal>

View File

@@ -39,7 +39,11 @@ export default function CheckCode() {
}
setIsLoading(true)
const ret = await verifyResetPasswordCode({ email, code, token })
ret.is_valid && router.push(`/reset-password/set-password?${searchParams.toString()}`)
if (ret.is_valid) {
const params = new URLSearchParams(searchParams)
params.set('token', encodeURIComponent(ret.token))
router.push(`/reset-password/set-password?${params.toString()}`)
}
}
catch (error) { console.error(error) }
finally {

View File

@@ -23,6 +23,7 @@ export NEXT_TELEMETRY_DISABLED=${NEXT_TELEMETRY_DISABLED}
export NEXT_PUBLIC_TEXT_GENERATION_TIMEOUT_MS=${TEXT_GENERATION_TIMEOUT_MS}
export NEXT_PUBLIC_CSP_WHITELIST=${CSP_WHITELIST}
export NEXT_PUBLIC_ALLOW_EMBED=${ALLOW_EMBED}
export NEXT_PUBLIC_TOP_K_MAX_VALUE=${TOP_K_MAX_VALUE}
export NEXT_PUBLIC_INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH=${INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH}

web/hooks/use-import-dsl.ts (new file; 158 lines)
View File

@@ -0,0 +1,158 @@
import {
useCallback,
useRef,
useState,
} from 'react'
import { useTranslation } from 'react-i18next'
import { useRouter } from 'next/navigation'
import type {
DSLImportMode,
DSLImportResponse,
} from '@/models/app'
import { DSLImportStatus } from '@/models/app'
import {
importDSL,
importDSLConfirm,
} from '@/service/apps'
import type { AppIconType } from '@/types/app'
import { useToastContext } from '@/app/components/base/toast'
import { getRedirection } from '@/utils/app-redirection'
import { useSelector } from '@/context/app-context'
import { NEED_REFRESH_APP_LIST_KEY } from '@/config'
type DSLPayload = {
mode: DSLImportMode
yaml_content?: string
yaml_url?: string
name?: string
icon_type?: AppIconType
icon?: string
icon_background?: string
description?: string
}
type ResponseCallback = {
onSuccess?: () => void
onPending?: (payload: DSLImportResponse) => void
onFailed?: () => void
}
export const useImportDSL = () => {
const { t } = useTranslation()
const { notify } = useToastContext()
const [isFetching, setIsFetching] = useState(false)
const isCurrentWorkspaceEditor = useSelector(s => s.isCurrentWorkspaceEditor)
const { push } = useRouter()
const [versions, setVersions] = useState<{ importedVersion: string; systemVersion: string }>()
const importIdRef = useRef<string>('')
const handleImportDSL = useCallback(async (
payload: DSLPayload,
{
onSuccess,
onPending,
onFailed,
}: ResponseCallback,
) => {
if (isFetching)
return
setIsFetching(true)
try {
const response = await importDSL(payload)
if (!response)
return
const {
id,
status,
app_id,
imported_dsl_version,
current_dsl_version,
} = response
if (status === DSLImportStatus.COMPLETED || status === DSLImportStatus.COMPLETED_WITH_WARNINGS) {
if (!app_id)
return
notify({
type: status === DSLImportStatus.COMPLETED ? 'success' : 'warning',
message: t(status === DSLImportStatus.COMPLETED ? 'app.newApp.appCreated' : 'app.newApp.caution'),
children: status === DSLImportStatus.COMPLETED_WITH_WARNINGS && t('app.newApp.appCreateDSLWarning'),
})
onSuccess?.()
localStorage.setItem(NEED_REFRESH_APP_LIST_KEY, '1')
getRedirection(isCurrentWorkspaceEditor, { id: app_id }, push)
}
else if (status === DSLImportStatus.PENDING) {
setVersions({
importedVersion: imported_dsl_version ?? '',
systemVersion: current_dsl_version ?? '',
})
importIdRef.current = id
onPending?.(response)
}
else {
notify({ type: 'error', message: t('app.newApp.appCreateFailed') })
onFailed?.()
}
}
catch {
notify({ type: 'error', message: t('app.newApp.appCreateFailed') })
onFailed?.()
}
finally {
setIsFetching(false)
}
}, [t, notify, isCurrentWorkspaceEditor, push, isFetching])
const handleImportDSLConfirm = useCallback(async (
{
onSuccess,
onFailed,
}: Pick<ResponseCallback, 'onSuccess' | 'onFailed'>,
) => {
if (isFetching)
return
setIsFetching(true)
if (!importIdRef.current)
return
try {
const response = await importDSLConfirm({
import_id: importIdRef.current,
})
const { status, app_id } = response
if (!app_id)
return
if (status === DSLImportStatus.COMPLETED) {
onSuccess?.()
notify({
type: 'success',
message: t('app.newApp.appCreated'),
})
localStorage.setItem(NEED_REFRESH_APP_LIST_KEY, '1')
getRedirection(isCurrentWorkspaceEditor, { id: app_id! }, push)
}
else if (status === DSLImportStatus.FAILED) {
notify({ type: 'error', message: t('app.newApp.appCreateFailed') })
onFailed?.()
}
}
catch {
notify({ type: 'error', message: t('app.newApp.appCreateFailed') })
onFailed?.()
}
finally {
setIsFetching(false)
}
}, [t, notify, isCurrentWorkspaceEditor, push, isFetching])
return {
handleImportDSL,
handleImportDSLConfirm,
versions,
isFetching,
}
}
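Editor's note: the hook wraps a two-phase protocol: importDSL either finishes outright or returns PENDING with an import id when _check_version_compatibility (above) flags a version gap, and importDSLConfirm replays that id once the user accepts. A rough Python client sketch of the same handshake; the endpoint paths and field names here are placeholders inferred from the hook, not Dify's documented API:

import requests  # assumed available; any HTTP client works


def import_dsl_with_confirm(base_url: str, yaml_content: str, user_accepts) -> dict:
    # Phase 1: submit the DSL.
    data = requests.post(
        f"{base_url}/apps/imports",  # placeholder path
        json={"mode": "yaml-content", "yaml_content": yaml_content},
    ).json()
    # Phase 2: on a version mismatch the server answers "pending";
    # confirm only after the user has seen both versions.
    if data.get("status") == "pending" and user_accepts(
        data.get("imported_dsl_version"), data.get("current_dsl_version")
    ):
        data = requests.post(f"{base_url}/apps/imports/{data['id']}/confirm").json()
    return data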

View File

@@ -3,10 +3,26 @@ import { NextResponse } from 'next/server'
const NECESSARY_DOMAIN = '*.sentry.io http://localhost:* http://127.0.0.1:* https://analytics.google.com googletagmanager.com *.googletagmanager.com https://www.google-analytics.com https://api.github.com'
const wrapResponseWithXFrameOptions = (response: NextResponse, pathname: string) => {
// prevent clickjacking: https://owasp.org/www-community/attacks/Clickjacking
// Chatbot page should be allowed to be embedded in iframe. It's a feature
if (process.env.NEXT_PUBLIC_ALLOW_EMBED !== 'true' && !pathname.startsWith('/chat'))
response.headers.set('X-Frame-Options', 'DENY')
return response
}
export function middleware(request: NextRequest) {
const { pathname } = request.nextUrl
const requestHeaders = new Headers(request.headers)
const response = NextResponse.next({
request: {
headers: requestHeaders,
},
})
const isWhiteListEnabled = !!process.env.NEXT_PUBLIC_CSP_WHITELIST && process.env.NODE_ENV === 'production'
if (!isWhiteListEnabled)
return NextResponse.next()
return wrapResponseWithXFrameOptions(response, pathname)
const whiteList = `${process.env.NEXT_PUBLIC_CSP_WHITELIST} ${NECESSARY_DOMAIN}`
const nonce = Buffer.from(crypto.randomUUID()).toString('base64')
@@ -33,7 +49,6 @@ export function middleware(request: NextRequest) {
.replace(/\s{2,}/g, ' ')
.trim()
const requestHeaders = new Headers(request.headers)
requestHeaders.set('x-nonce', nonce)
requestHeaders.set(
@@ -41,17 +56,12 @@ export function middleware(request: NextRequest) {
contentSecurityPolicyHeaderValue,
)
const response = NextResponse.next({
request: {
headers: requestHeaders,
},
})
response.headers.set(
'Content-Security-Policy',
contentSecurityPolicyHeaderValue,
)
return response
return wrapResponseWithXFrameOptions(response, pathname)
}
export const config = {
@@ -73,4 +83,4 @@ export const config = {
// ],
},
],
}
}
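Editor's note: the middleware change creates the response up front so wrapResponseWithXFrameOptions runs on both the early-return path (CSP whitelist disabled) and the full CSP path. The header rule itself is small enough to restate; a one-function Python sketch of the same decision:

def apply_frame_options(headers: dict[str, str], pathname: str, allow_embed: bool) -> dict[str, str]:
    # Deny framing (clickjacking defence) unless embedding is explicitly
    # enabled or the request targets the embeddable /chat routes.
    if not allow_embed and not pathname.startswith("/chat"):
        headers["X-Frame-Options"] = "DENY"
    return headers


print(apply_frame_options({}, "/apps", allow_embed=False))      # {'X-Frame-Options': 'DENY'}
print(apply_frame_options({}, "/chat/abc", allow_embed=False))  # {}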

View File

@@ -1,6 +1,6 @@
{
"name": "dify-web",
"version": "0.15.3",
"version": "0.15.8",
"private": true,
"engines": {
"node": ">=18.17.0"
@@ -51,6 +51,7 @@
"crypto-js": "^4.2.0",
"dayjs": "^1.11.7",
"decimal.js": "^10.4.3",
"dompurify": "^3.2.4",
"echarts": "^5.5.1",
"echarts-for-react": "^3.0.2",
"elkjs": "^0.9.3",
@@ -70,7 +71,7 @@
"mermaid": "11.4.1",
"mime": "^4.0.4",
"negotiator": "^0.6.3",
"next": "^14.2.10",
"next": "^14.2.25",
"pinyin-pro": "^3.23.0",
"qrcode.react": "^3.1.0",
"qs": "^6.11.1",
@@ -160,7 +161,7 @@
"jest-environment-jsdom": "^29.7.0",
"lint-staged": "^13.2.2",
"magicast": "^0.3.4",
"postcss": "^8.4.31",
"postcss": "^8.4.47",
"sass": "^1.61.0",
"storybook": "^8.3.5",
"tailwindcss": "^3.4.4",
@@ -171,7 +172,10 @@
"resolutions": {
"@types/react": "~18.2.0",
"@types/react-dom": "~18.2.0",
"string-width": "4.2.3"
"string-width": "4.2.3",
"nanoid": "~3.3.8",
"esbuild": "~0.25.0",
"serialize-javascript": "~6.0.2"
},
"lint-staged": {
"**/*.js?(x)": [

View File

@@ -40,7 +40,7 @@ import type { SystemFeatures } from '@/types/feature'
type LoginSuccess = {
result: 'success'
data: { access_token: string;refresh_token: string }
data: { access_token: string; refresh_token: string }
}
type LoginFail = {
result: 'fail'
@@ -331,20 +331,20 @@ export const uploadRemoteFileInfo = (url: string, isPublic?: boolean) => {
export const sendEMailLoginCode = (email: string, language = 'en-US') =>
post<CommonResponse & { data: string }>('/email-code-login', { body: { email, language } })
export const emailLoginWithCode = (data: { email: string;code: string;token: string }) =>
export const emailLoginWithCode = (data: { email: string; code: string; token: string }) =>
post<LoginResponse>('/email-code-login/validity', { body: data })
export const sendResetPasswordCode = (email: string, language = 'en-US') =>
post<CommonResponse & { data: string;message?: string ;code?: string }>('/forgot-password', { body: { email, language } })
post<CommonResponse & { data: string; message?: string; code?: string }>('/forgot-password', { body: { email, language } })
export const verifyResetPasswordCode = (body: { email: string;code: string;token: string }) =>
post<CommonResponse & { is_valid: boolean }>('/forgot-password/validity', { body })
export const verifyResetPasswordCode = (body: { email: string; code: string; token: string }) =>
post<CommonResponse & { is_valid: boolean; token: string }>('/forgot-password/validity', { body })
export const sendDeleteAccountCode = () =>
get<CommonResponse & { data: string }>('/account/delete/verify')
export const verifyDeleteAccountCode = (body: { code: string;token: string }) =>
export const verifyDeleteAccountCode = (body: { code: string; token: string }) =>
post<CommonResponse & { is_valid: boolean }>('/account/delete', { body })
export const submitDeleteAccountFeedback = (body: { feedback: string;email: string }) =>
export const submitDeleteAccountFeedback = (body: { feedback: string; email: string }) =>
post<CommonResponse>('/account/delete/feedback', { body })

File diff suppressed because it is too large.