Compare commits
123 Commits
main...docker_mst
| SHA1 | | | |
|---|---|---|---|
| fc102073b1 | |||
| ab4c11042e | |||
| 27b0a8c6cd | |||
| dc218e1d4f | |||
| 0f8ef428b5 | |||
| 280a90b019 | |||
| 894cba6c02 | |||
| 5d7530b756 | |||
| 127daa3628 | |||
| 2f87c24e9d | |||
| 1746a025ed | |||
| 5c8bce6b4a | |||
| b256b245c7 | |||
| 088568cb05 | |||
| b42dda6054 | |||
| 59664a445a | |||
| 34d1de4be3 | |||
| d9614ae209 | |||
| da726556dd | |||
| 5ed669d568 | |||
| 901d91e83e | |||
| efb128ee28 | |||
| 6a75597134 | |||
| 5a573be0fc | |||
| bdd124376b | |||
| 5c8ec6479b | |||
| 3ac8b18c98 | |||
| 4a81d1f6f9 | |||
| a2df6214cc | |||
| 74b329352c | |||
| e334df0939 | |||
| 09ae0bc8c0 | |||
| ef7e2caf4f | |||
| 580714b300 | |||
| 43a263e109 | |||
| 3183dee8cb | |||
| a5160ce869 | |||
| 74d68340b5 | |||
| b8addc039a | |||
| 7c1c1ed9af | |||
| 8dd984401a | |||
| d12c973be9 | |||
| 75613c2476 | |||
| 6b73c1c0b7 | |||
| 743ed8813e | |||
| 24bf90778c | |||
| deeffc09d7 | |||
| df1741af8a | |||
| 2ec28bfc6e | |||
| b6644de6e1 | |||
| f00f654e7a | |||
| 637cdea796 | |||
| 1f0899c85e | |||
| 609386cc44 | |||
| 2f241bb51e | |||
| a7d565b38f | |||
| aaf92cb57d | |||
| 7d2cd8738f | |||
| a8aeeb9c68 | |||
| a690925a28 | |||
| d5e60c5109 | |||
| 1411d093da | |||
| 929a736b17 | |||
| 38b46874d3 | |||
| 5b3e39c31b | |||
| 900314a6b6 | |||
| cec0a25e7c | |||
| 83afafcb12 | |||
| b717de1069 | |||
| d18a371b44 | |||
| a96bfc7da1 | |||
| 07d9b61ad2 | |||
| 0bb34c50bb | |||
| 08aaf01016 | |||
| 274a54c889 | |||
| ed37509156 | |||
| 556b1d797c | |||
| 0eb6eb1e7a | |||
| f56d049bfe | |||
| 8e4d66e9d8 | |||
| 71af786bce | |||
| 53d0ee9f3a | |||
| 4f6556c2a2 | |||
| 823695aaa0 | |||
| cdb75fa4e3 | |||
| 71297abd93 | |||
| c9d9d6efaf | |||
| aff440648a | |||
| 4eea5eb880 | |||
| fcb43d1792 | |||
| b20127ced9 | |||
| 3897145dbe | |||
| a6ee35e220 | |||
| 9f8dc202ac | |||
| 63d6a8f264 | |||
| 7e4611d16b | |||
| 0ba25415ad | |||
| c839b09beb | |||
| e59a40e142 | |||
| d2ab014258 | |||
| 0b0043d98d | |||
| 6b16612214 | |||
| ed90316457 | |||
| 737f5dce00 | |||
| 46bf7e8fcc | |||
| 29ff05684a | |||
| 0165e52ccb | |||
| ee1c025cd5 | |||
| 819ac30a5a | |||
| 1fe2151030 | |||
| b1137bc1ad | |||
| 63add6a2e6 | |||
| ace9157b0f | |||
| 3a8c621f7b | |||
| f991ea2fac | |||
| 37a867797f | |||
| 46b782d912 | |||
| f7da32fa6a | |||
| a844d45e63 | |||
| 0e5a50cade | |||
| 2e4d5a64c9 | |||
| fc96042cae | |||
| f51756e4b5 |
README.md (115 lines changed)
@@ -1,13 +1,110 @@

Removed (old README):

    efka
    =====
    An OTP application

    1. First, solve the upstream data path
    2. todo list
    Must fix reconnection after a dropped connection !!!

    Build
    -----
        $ rebar3 compile

Added (new README):

# ws_channel Module API Documentation and Interaction Logic

## Note: the WebSocket data format is text

## 1. Module Overview

`ws_channel` is an MQTT-style interaction module built on Erlang + Cowboy WebSocket. It handles service registration, topic subscription, metric reporting, event sending, and message broadcasting, giving clients real-time bidirectional communication with the server over the WebSocket protocol.

## 3. Core API Methods

Clients interact with the server by sending **JSON-formatted text messages** in a JSON-RPC style (each message carries `id`, `method`, and `params` fields).
### 1. Service registration (register)

#### Function

Registers a service and associates the client connection with a service process. Registration is a prerequisite for all later operations (subscribing, reporting data, etc.).

#### Request format

```json
{
  "id": <integer, unique request id>,
  "method": "register",
  "params": {
    "service_id": <binary, unique service id, required>,
    "meta_data": <map, service metadata, optional>,
    "container_name": <binary, container name, optional>
  }
}
```

#### Response format

- Success:

```json
{
  "id": <same as the request id>,
  "result": "ok"
}
```

- Failure: the server closes the connection directly (because `attach_channel` failed)
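The register exchange can be sketched from the client side. This is a minimal illustration of the documented message shapes only; the helper names and example values are assumptions, not part of the module's API:

```python
import json

def register_request(req_id, service_id, meta_data=None, container_name=None):
    """Build a `register` request as documented above (illustrative helper)."""
    params = {"service_id": service_id}
    if meta_data is not None:
        params["meta_data"] = meta_data            # optional field
    if container_name is not None:
        params["container_name"] = container_name  # optional field
    return json.dumps({"id": req_id, "method": "register", "params": params})

def is_register_ok(raw_reply, req_id):
    """Check the documented success reply: matching id and result == "ok"."""
    reply = json.loads(raw_reply)
    return reply.get("id") == req_id and reply.get("result") == "ok"

request_text = register_request(1, "svc-demo", meta_data={"version": "0.1"})
```

Since a failed registration simply closes the socket, a client only ever needs to recognize the success shape.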
|
||||||
|
|
||||||
|
### 2. 主题订阅(subscribe)
|
||||||
|
#### 功能
|
||||||
|
订阅指定主题,后续可接收该主题的广播消息。
|
||||||
|
|
||||||
|
#### 请求格式
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"id": <整数,请求唯一标识>,
|
||||||
|
"method": "subscribe",
|
||||||
|
"params": {
|
||||||
|
"topic": <二进制,订阅的主题名称,必填>
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 响应格式
|
||||||
|
- 成功响应:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"id": <与请求id一致>,
|
||||||
|
"result": "ok"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
- 失败响应:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"id": <与请求id一致>,
|
||||||
|
"error": {
|
||||||
|
"code": -1,
|
||||||
|
"message": "错误描述"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 处理逻辑
|
||||||
|
通过 `efka_subscription:subscribe(Topic, self())` 完成订阅,订阅成功后客户端会收到该主题的广播消息。
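Because a reply carries either `result` or `error`, a client can branch on which key is present. A small sketch of that check (helper names are illustrative):

```python
import json

def parse_reply(raw_reply):
    """Split a documented reply into (result, error); exactly one is set."""
    reply = json.loads(raw_reply)
    return reply.get("result"), reply.get("error")

ok_result, ok_error = parse_reply('{"id": 2, "result": "ok"}')
err_result, err_error = parse_reply(
    '{"id": 3, "error": {"code": -1, "message": "subscribe failed"}}')
```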
### 3. Metric reporting (metric_data)

#### Function

Reports device metric data to the service process.

#### Request format

```json
{
  "method": "metric_data",
  "params": {
    "route_key": <binary, routing key, required>,
    "metric": <metric data, required>
  }
}
```

#### Response handling

The server sends no reply after receiving this message (handled by `efka_service:metric_data(ServicePid, DeviceUUID, RouteKey, Metric)`)
## 5. Basic interaction protocol

1. **Ping/Pong heartbeat**:
   - The client sends a `ping` message
   - The server replies with a `pong` message to keep the connection alive

2. **Unknown messages**:
   - When a client sends a message in an undefined format, the server logs an error and closes the connection
## 7. Typical interaction flow

1. The client opens a WebSocket connection
2. The client sends a `register` request to complete registration
3. The client sends `subscribe` requests for its target topics
4. The client reports metrics via `metric_data` / sends events via `event`
5. The server pushes messages for subscribed topics to the client (`publish` method)
6. The connection closes (active disconnect or abnormal termination)
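The flow above (steps 2-4) can be sketched as the ordered list of messages a client would send; connection setup and teardown are outside the sketch, and the helper name and values are illustrative:

```python
import json

def client_messages(service_id, topic, route_key, metric):
    """Messages a client sends, in order, for the typical flow:
    register, then subscribe, then a metric_data notification."""
    return [
        json.dumps({"id": 1, "method": "register",
                    "params": {"service_id": service_id}}),
        json.dumps({"id": 2, "method": "subscribe",
                    "params": {"topic": topic}}),
        # no id: metric_data is a notification, the server does not reply
        json.dumps({"method": "metric_data",
                    "params": {"route_key": route_key, "metric": metric}}),
    ]

flow = client_messages("svc-demo", "alerts", "cpu", 42)
```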
@@ -1,54 +0,0 @@

Removed (old message header file):

%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%% Extensions: 1. topic-based pub/sub; 2. target-based point-to-point messaging and broadcast
%%% @end
%%% Created : 21. Apr 2025 17:28
%%%-------------------------------------------------------------------
-author("anlicheng").

%% Message body types initiated by efka (top-level classes)
-define(PACKET_REQUEST, 16#01).
-define(PACKET_RESPONSE, 16#02).

%% Server pub/sub messages (top-level class)
-define(PACKET_PUB, 16#03).

%% push calls need no reply (top-level class)
-define(PACKET_COMMAND, 16#04).

%% Server-side push messages
-define(PACKET_ASYNC_CALL, 16#05).
-define(PACKET_ASYNC_CALL_REPLY, 16#06).

%% ping packet, initiated by the client
-define(PACKET_PING, 16#FF).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Second-level classification
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Type identifiers for data reported by the host
-define(METHOD_AUTH, 16#01).
-define(METHOD_DATA, 16#02).
-define(METHOD_PING, 16#03).
-define(METHOD_INFORM, 16#04).
-define(METHOD_EVENT, 16#05).
-define(METHOD_PHASE, 16#06).
-define(METHOD_REQUEST_SERVICE_CONFIG, 16#07).

%%%% Command subtypes, no return value

%% Authorization
-define(COMMAND_AUTH, 16#08).

%%%% Pushed message subtypes, return value required

-define(PUSH_DEPLOY, 16#01).
-define(PUSH_START_SERVICE, 16#02).
-define(PUSH_STOP_SERVICE, 16#03).

-define(PUSH_SERVICE_CONFIG, 16#04).
-define(PUSH_INVOKE, 16#05).
-define(PUSH_TASK_LOG, 16#06).
@@ -4,37 +4,21 @@
 %%% @doc
 %%%
 %%% @end
-%%% Created : 30. Apr 2025 11:16
+%%% Created : 29. Sep 2025 15:27
 %%%-------------------------------------------------------------------
 -author("anlicheng").
 
+-define(SERVICE_STOPPED, 0).
+-define(SERVICE_RUNNING, 1).
+
 %% Holds the microservices
 -record(service, {
     service_id :: binary(),
-    tar_url :: binary(),
-    %% working directory
-    root_dir :: string(),
-    %% config info
-    config_json :: binary(),
+    container_name :: binary(),
+    %% config info, registered by the microservice itself at startup
+    meta_data = #{} :: map(),
     %% status: 0: stopped, 1: running
-    status = 0
+    status = 0,
+    create_ts = 0 :: integer(),
+    update_ts = 0 :: integer()
 }).
-
-%% data cache
--record(cache, {
-    id = 0 :: integer(),
-    method :: integer(),
-    data :: binary()
-}).
-
-%% data cache
--record(task_log, {
-    task_id = 0 :: integer(),
-    logs = [] :: list()
-}).
-
-%% id generator
--record(id_generator, {
-    id,
-    value = 1
-}).
apps/efka/include/message.hrl (new file, 86 lines)
@@ -0,0 +1,86 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%% Extensions: 1. topic-based pub/sub; 2. target-based point-to-point messaging and broadcast
%%% @end
%%% Created : 21. Apr 2025 17:28
%%%-------------------------------------------------------------------
-author("anlicheng").

%% Message body types initiated by efka (top-level classes)
-define(PACKET_REQUEST, 16#01).
-define(PACKET_RESPONSE, 16#02).

%% Data initiated by efka that needs no reply
-define(PACKET_CAST, 16#03).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Second-level classification
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Type identifiers for data reported by the host
-define(MESSAGE_AUTH_REQUEST, 16#01).
-define(MESSAGE_AUTH_REPLY, 16#02).

-define(MESSAGE_COMMAND, 16#03).
-define(MESSAGE_DEPLOY, 16#04).
-define(MESSAGE_PUB, 16#05).

-define(MESSAGE_DATA, 16#06).
-define(MESSAGE_EVENT, 16#07).

%% event-stream pushed by efka, one-way; mainly real-time progress reporting for docker-create
-define(MESSAGE_EVENT_STREAM, 16#08).

-define(MESSAGE_JSONRPC_REQUEST, 16#F0).
-define(MESSAGE_JSONRPC_REPLY, 16#F1).

%%%% Command subtypes, no return value
%% Authorization
-define(COMMAND_AUTH, 16#08).

-record(auth_request, {
    uuid :: binary(),
    username :: binary(),
    salt :: binary(),
    token :: binary(),
    timestamp :: integer()
}).

-record(auth_reply, {
    code :: integer(),
    payload :: binary()
}).

-record(pub, {
    topic :: binary(),
    qos = 0 :: integer(),
    content :: binary()
}).

-record(command, {
    command_type :: integer(),
    command :: binary()
}).

-record(jsonrpc_request, {
    method :: binary(),
    params = <<>> :: any()
}).

-record(jsonrpc_reply, {
    result :: any() | undefined,
    error :: any() | undefined
}).

-record(data, {
    route_key :: binary(),
    metric :: binary()
}).

-record(task_event_stream, {
    task_id :: integer(),
    type :: binary(),
    stream :: binary()
}).
@@ -1,128 +0,0 @@

Removed (generated message_pb.hrl):

%% -*- coding: utf-8 -*-
%% Automatically generated, do not edit
%% Generated by gpb_compile version 4.21.1

-ifndef(message_pb).
-define(message_pb, true).

-define(message_pb_gpb_version, "4.21.1").

-ifndef('AUTH_REQUEST_PB_H').
-define('AUTH_REQUEST_PB_H', true).
-record(auth_request,
        {uuid = <<>> :: unicode:chardata() | undefined, % = 1, optional
         username = <<>> :: unicode:chardata() | undefined, % = 2, optional
         salt = <<>> :: unicode:chardata() | undefined, % = 4, optional
         token = <<>> :: unicode:chardata() | undefined, % = 5, optional
         timestamp = 0 :: non_neg_integer() | undefined % = 6, optional, 32 bits
        }).
-endif.

-ifndef('AUTH_REPLY_PB_H').
-define('AUTH_REPLY_PB_H', true).
-record(auth_reply,
        {code = 0 :: non_neg_integer() | undefined, % = 1, optional, 32 bits
         message = <<>> :: unicode:chardata() | undefined % = 2, optional
        }).
-endif.

-ifndef('PUB_PB_H').
-define('PUB_PB_H', true).
-record(pub,
        {topic = <<>> :: unicode:chardata() | undefined, % = 1, optional
         content = <<>> :: unicode:chardata() | undefined % = 2, optional
        }).
-endif.

-ifndef('ASYNC_CALL_REPLY_PB_H').
-define('ASYNC_CALL_REPLY_PB_H', true).
-record(async_call_reply,
        {code = 0 :: non_neg_integer() | undefined, % = 1, optional, 32 bits
         result = <<>> :: unicode:chardata() | undefined, % = 2, optional
         message = <<>> :: unicode:chardata() | undefined % = 3, optional
        }).
-endif.

-ifndef('DEPLOY_PB_H').
-define('DEPLOY_PB_H', true).
-record(deploy,
        {task_id = 0 :: non_neg_integer() | undefined, % = 1, optional, 32 bits
         service_id = <<>> :: unicode:chardata() | undefined, % = 2, optional
         tar_url = <<>> :: unicode:chardata() | undefined % = 3, optional
        }).
-endif.

-ifndef('FETCH_TASK_LOG_PB_H').
-define('FETCH_TASK_LOG_PB_H', true).
-record(fetch_task_log,
        {task_id = 0 :: non_neg_integer() | undefined % = 1, optional, 32 bits
        }).
-endif.

-ifndef('INVOKE_PB_H').
-define('INVOKE_PB_H', true).
-record(invoke,
        {service_id = <<>> :: unicode:chardata() | undefined, % = 1, optional
         payload = <<>> :: unicode:chardata() | undefined, % = 2, optional
         timeout = 0 :: non_neg_integer() | undefined % = 3, optional, 32 bits
        }).
-endif.

-ifndef('PUSH_SERVICE_CONFIG_PB_H').
-define('PUSH_SERVICE_CONFIG_PB_H', true).
-record(push_service_config,
        {service_id = <<>> :: unicode:chardata() | undefined, % = 1, optional
         config_json = <<>> :: unicode:chardata() | undefined, % = 2, optional
         timeout = 0 :: non_neg_integer() | undefined % = 3, optional, 32 bits
        }).
-endif.

-ifndef('DATA_PB_H').
-define('DATA_PB_H', true).
-record(data,
        {service_id = <<>> :: unicode:chardata() | undefined, % = 1, optional
         device_uuid = <<>> :: unicode:chardata() | undefined, % = 2, optional
         metric = <<>> :: unicode:chardata() | undefined % = 3, optional
        }).
-endif.

-ifndef('PING_PB_H').
-define('PING_PB_H', true).
-record(ping,
        {adcode = <<>> :: unicode:chardata() | undefined, % = 1, optional
         boot_time = 0 :: non_neg_integer() | undefined, % = 2, optional, 32 bits
         province = <<>> :: unicode:chardata() | undefined, % = 3, optional
         city = <<>> :: unicode:chardata() | undefined, % = 4, optional
         efka_version = <<>> :: unicode:chardata() | undefined, % = 5, optional
         kernel_arch = <<>> :: unicode:chardata() | undefined, % = 6, optional
         ips = [] :: [unicode:chardata()] | undefined, % = 7, repeated
         cpu_core = 0 :: non_neg_integer() | undefined, % = 8, optional, 32 bits
         cpu_load = 0 :: non_neg_integer() | undefined, % = 9, optional, 32 bits
         cpu_temperature = 0.0 :: float() | integer() | infinity | '-infinity' | nan | undefined, % = 10, optional
         disk = [] :: [integer()] | undefined, % = 11, repeated, 32 bits
         memory = [] :: [integer()] | undefined, % = 12, repeated, 32 bits
         interfaces = <<>> :: unicode:chardata() | undefined % = 13, optional
        }).
-endif.

-ifndef('SERVICE_INFORM_PB_H').
-define('SERVICE_INFORM_PB_H', true).
-record(service_inform,
        {service_id = <<>> :: unicode:chardata() | undefined, % = 1, optional
         props = <<>> :: unicode:chardata() | undefined, % = 2, optional
         status = 0 :: non_neg_integer() | undefined, % = 3, optional, 32 bits
         timestamp = 0 :: non_neg_integer() | undefined % = 4, optional, 32 bits
        }).
-endif.

-ifndef('EVENT_PB_H').
-define('EVENT_PB_H', true).
-record(event,
        {service_id = <<>> :: unicode:chardata() | undefined, % = 1, optional
         event_type = 0 :: non_neg_integer() | undefined, % = 2, optional, 32 bits
         params = <<>> :: unicode:chardata() | undefined % = 3, optional
        }).
-endif.

-endif.
apps/efka/src/channel/upload_channel.erl (new file, 79 lines)
@@ -0,0 +1,79 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 12. Nov 2025 17:08
%%%-------------------------------------------------------------------
-module(upload_channel).
-author("anlicheng").

%% API
-export([init/2]).

init(Req0, Opts) ->
    Method = binary_to_list(cowboy_req:method(Req0)),
    lager:debug("[upload_channel] method is: ~p", [Method]),

    Headers = cowboy_req:headers(Req0),
    lager:debug("headers is: ~p", [Headers]),
    case maps:find(<<"content-type">>, Headers) of
        {ok, <<"application/octet-stream">>} ->
            Filename = maps:get(<<"x-filename">>, Headers),
            case filename:extension(Filename) of
                <<>> ->
                    Req = cowboy_req:reply(400, #{
                        <<"Content-Type">> => <<"text/html;charset=utf-8">>
                    }, <<"Missing file extension">>, Req0),
                    {ok, Req, Opts};
                _ ->
                    Basename = filename:basename(Filename),
                    handle_raw_file(Req0, binary_to_list(Basename)),
                    %% body fully written to disk, report success
                    Req2 = cowboy_req:reply(200, #{
                        <<"Content-Type">> => <<"text/html;charset=utf-8">>
                    }, <<"ok">>, Req0),
                    {ok, Req2, Opts}
            end;
        {ok, ContentType} ->
            lager:debug("[upload_channel] unexpected content-type: ~p", [ContentType]),
            Req = cowboy_req:reply(400, #{
                <<"Content-Type">> => <<"text/html;charset=utf-8">>
            }, <<"Expected application/octet-stream">>, Req0),
            {ok, Req, Opts};
        error ->
            Req = cowboy_req:reply(400, #{
                <<"Content-Type">> => <<"text/html;charset=utf-8">>
            }, <<"Missing content-type header">>, Req0),
            {ok, Req, Opts}
    end.

%% Read the request body and stream it into a file
handle_raw_file(Req, Basename) ->
    Filename = make_file(Basename),
    {ok, IoDevice} = file:open(Filename, [write]),
    ok = handle_raw_file0(Req, IoDevice),
    ok = file:close(IoDevice).

handle_raw_file0(Req, IoDevice) ->
    case cowboy_req:read_body(Req) of
        {ok, Data, _Req1} ->
            ok = file:write(IoDevice, Data);
        {more, Data, Req1} ->
            ok = file:write(IoDevice, Data),
            handle_raw_file0(Req1, IoDevice)
    end.

make_file(Basename) when is_list(Basename) ->
    {ok, UploadDir} = application:get_env(efka, upload_dir),
    {{Y, M, D}, _} = calendar:local_time(),
    DateDir = io_lib:format("~p-~p-~p", [Y, M, D]),
    BaseDir = UploadDir ++ DateDir,
    case filelib:is_dir(BaseDir) of
        true ->
            ok;
        false ->
            ok = file:make_dir(BaseDir)
    end,
    BaseDir ++ "/" ++ Basename.
apps/efka/src/channel/ws_channel.erl (new file, 257 lines)
@@ -0,0 +1,257 @@
%%%-------------------------------------------------------------------
%%% @author licheng5
%%% @copyright (C) 2021, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 11. Jan 2021 00:17
%%%-------------------------------------------------------------------
-module(ws_channel).
-author("licheng5").
-include("efka_tables.hrl").

%% API
-export([init/2]).
-export([websocket_init/1, websocket_handle/2, websocket_info/2, terminate/3]).

%% Maximum wait time
-define(PENDING_TIMEOUT, 10 * 1000).

-record(state, {
    service_id :: undefined | binary(),
    service_pid :: undefined | pid(),

    stream_id = 1,
    %% #{stream_id => {StreamPid, StreamRef}}
    stream_map = #{},

    is_registered = false :: boolean()
}).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% handler callbacks
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

init(Req, Opts) ->
    {cowboy_websocket, Req, Opts}.

websocket_init(_State) ->
    lager:debug("[ws_channel] get a new connection"),
    %% the initial state is unregistered
    {ok, #state{}}.

websocket_handle(ping, State) ->
    {reply, pong, State};

websocket_handle({text, Data}, State) ->
    Request = jiffy:decode(Data, [return_maps]),
    lager:debug("[ws_channel] get request: ~p", [Request]),
    handle_request(Request, State);

websocket_handle(Info, State) ->
    lager:error("[ws_channel] got an unknown message: ~p", [Info]),
    {ok, State}.

%% Broadcast messages for subscribed topics
websocket_info({topic_broadcast, Topic, Content}, State = #state{}) ->
    Req = iolist_to_binary(jiffy:encode(#{
        <<"method">> => <<"publish">>,
        <<"params">> => #{<<"topic">> => Topic, <<"content">> => Content}
    }, [force_utf8])),

    lager:debug("[ws_channel] will publish topic: ~p, message: ~p", [Topic, Req]),

    {reply, {text, Req}, State};

%% The service process exited
websocket_info({'DOWN', _Ref, process, ServicePid, Reason}, State = #state{service_pid = ServicePid}) ->
    lager:debug("[ws_channel] container_pid: ~p, exited: ~p", [ServicePid, Reason]),
    {stop, State#state{service_pid = undefined}};

%% A stream process exited
websocket_info({'DOWN', _Ref, process, StreamPid, Reason}, State = #state{stream_map = StreamMap}) ->
    case search_stream_id(StreamPid, StreamMap) of
        error ->
            {ok, State};
        {ok, StreamId} ->
            case Reason of
                normal ->
                    {ok, State#state{stream_map = maps:remove(StreamId, StreamMap)}};
                _ ->
                    PushReply = json_push(#{
                        <<"stream_reply">> => #{
                            <<"stream_id">> => StreamId,
                            <<"result">> => <<"task failed">>
                        }
                    }),
                    {reply, {text, PushReply}, State#state{stream_map = maps:remove(StreamId, StreamMap)}}
            end
    end;

%% A stream task finished
websocket_info({stream_reply, StreamPid, Reply}, State = #state{stream_map = StreamMap}) ->
    case search_stream_id(StreamPid, StreamMap) of
        error ->
            {ok, State};
        {ok, StreamId} ->
            PushReply = json_push(#{
                <<"stream_reply">> => #{
                    <<"stream_id">> => StreamId,
                    <<"result">> => Reply
                }
            }),
            {reply, {text, PushReply}, State}
    end;

%% Handle the stop signal
websocket_info({stop, Reason}, State) ->
    lager:debug("[ws_channel] the channel will be closed with reason: ~p", [Reason]),
    {stop, State};

%% Handle other unknown messages
websocket_info(Info, State) ->
    lager:debug("[ws_channel] channel get unknown info: ~p", [Info]),
    {ok, State}.

%% Process termination
terminate(Reason, _Req, State = #state{service_id = ServiceId, is_registered = IsRegistered}) ->
    case IsRegistered of
        true ->
            ok = service_model:change_status(ServiceId, 0);
        false ->
            ok
    end,
    lager:debug("[ws_channel] channel close with reason: ~p, state is: ~p", [Reason, State]),
    ok.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% helper methods
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Registration: establish the link between the program and its container
handle_request(#{<<"id">> := Id, <<"method">> := <<"register">>, <<"params">> := Params = #{<<"service_id">> := ServiceId}}, State) ->
    {ok, ServicePid} = efka_service_sup:start_service(ServiceId),
    case efka_service:attach_channel(ServicePid, self()) of
        ok ->
            Reply = json_result(Id, <<"ok">>),
            erlang:monitor(process, ServicePid),

            %% Update the microservice's status
            MetaData = maps:get(<<"meta_data">>, Params, #{}),
            ContainerName = maps:get(<<"container_name">>, Params, <<>>),
            ok = service_model:insert(#service{
                service_id = ServiceId,
                container_name = ContainerName,
                status = ?SERVICE_RUNNING,
                meta_data = MetaData,
                create_ts = efka_util:timestamp(),
                update_ts = efka_util:timestamp()
            }),

            {reply, {text, Reply}, State#state{service_id = ServiceId, service_pid = ServicePid, is_registered = true}};
        {error, Error} ->
            lager:warning("[ws_channel] service_id: ~p, attach_channel get error: ~p", [ServiceId, Error]),
            {stop, State}
    end;

%% Subscription
handle_request(#{<<"id">> := Id, <<"method">> := <<"subscribe">>, <<"params">> := #{<<"topic">> := Topic}}, State = #state{is_registered = true}) ->
    Reply = case efka_subscription:subscribe(Topic, self()) of
        ok ->
            json_result(Id, <<"ok">>);
        {error, Reason} ->
            json_error(Id, -1, Reason)
    end,
    {reply, {text, Reply}, State};

%% File upload
handle_request(#{<<"id">> := Id, <<"method">> := <<"new_stream">>,
    <<"params">> := #{<<"file_name">> := Filename0, <<"file_size">> := FileSize}}, State = #state{stream_id = StreamId, stream_map = StreamMap, is_registered = true}) ->
    Filename = filename:basename(binary_to_list(Filename0)),

    {ok, {StreamPid, StreamRef}} = efka_stream:start_monitor(self()),
    {ok, Path} = efka_stream:setup(StreamPid, Filename, FileSize),

    Reply = json_result(Id, #{
        <<"stream_id">> => StreamId,
        <<"path">> => Path
    }),
    {reply, {text, Reply}, State#state{stream_id = StreamId + 1, stream_map = maps:put(StreamId, {StreamPid, StreamRef}, StreamMap)}};

handle_request(#{<<"method">> := <<"stream_chunk">>,
    <<"params">> := #{<<"stream_id">> := StreamId, <<"chunk_data">> := ChunkData}}, State = #state{stream_map = StreamMap, is_registered = true}) ->
    case maps:find(StreamId, StreamMap) of
        error ->
            {ok, State};
        {ok, {StreamPid, _}} ->
            case ChunkData =:= <<>> of
                true ->
                    efka_stream:finish(StreamPid);
                false ->
                    efka_stream:data(StreamPid, ChunkData)
            end,
            {ok, State}
    end;

%% Data items
handle_request(#{<<"method">> := <<"metric_data">>,
    <<"params">> := #{<<"route_key">> := RouteKey, <<"metric">> := Metric0}}, State = #state{service_pid = ServicePid, is_registered = true}) ->
    case map_metric(Metric0) of
        {ok, Metric} ->
            efka_service:metric_data(ServicePid, RouteKey, Metric);
        error ->
            lager:debug("[ws_channel] metric_data get invalid metric: ~p", [Metric0])
    end,
    {ok, State}.

-spec json_result(Id :: integer(), Result :: term()) -> binary().
json_result(Id, Result) when is_integer(Id) ->
    Response = #{
        <<"id">> => Id,
        <<"result">> => Result
    },
    jiffy:encode(Response, [force_utf8]).

-spec json_error(Id :: integer(), Code :: integer(), Message :: binary()) -> binary().
json_error(Id, Code, Message) when is_integer(Id), is_integer(Code), is_binary(Message) ->
    Response = #{
        <<"id">> => Id,
        <<"error">> => #{<<"code">> => Code, <<"message">> => Message}
    },
    jiffy:encode(Response, [force_utf8]).

-spec json_push(Result :: term()) -> binary().
json_push(Result) ->
    Response = #{
        <<"push">> => Result
    },
    jiffy:encode(Response, [force_utf8]).

-spec search_stream_id(StreamPid :: pid(), StreamMap :: map()) -> error | {ok, StreamId :: integer()}.
search_stream_id(StreamPid, StreamMap) when is_pid(StreamPid), is_map(StreamMap) ->
    StreamIds = lists:filtermap(fun({StreamId, {StreamPid0, _}}) ->
        case StreamPid0 =:= StreamPid of
            true ->
                {true, StreamId};
            false ->
                false
        end
    end, maps:to_list(StreamMap)),
    case StreamIds of
        [] ->
            error;
        [StreamId|_] ->
            {ok, StreamId}
    end.

-spec map_metric(Metric :: any()) -> {ok, binary()} | error.
map_metric(Metric) when is_binary(Metric) ->
    {ok, Metric};
map_metric(Metric) when is_map(Metric) orelse is_list(Metric) ->
    {ok, jiffy:encode(Metric, [force_utf8])};
map_metric(Metric) when is_integer(Metric) ->
    {ok, integer_to_binary(Metric)};
map_metric(Metric) when is_float(Metric) ->
    {ok, erlang:float_to_binary(Metric, [compact, {decimals, 10}])};
map_metric(_) ->
    error.
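The `new_stream`/`stream_chunk` handlers above imply a simple upload protocol: open a stream, send chunks addressed to the stream id, and signal completion with an empty `chunk_data`. A client-side sketch of that message sequence (helper name, values, and chunk encoding are assumptions; in practice the stream id comes from the server's `new_stream` reply):

```python
import json

def stream_messages(req_id, stream_id, file_name, chunks):
    """Message sequence for a file upload over the WebSocket channel:
    one new_stream request, then one stream_chunk per chunk, then an
    empty chunk to tell the server the stream is complete."""
    msgs = [json.dumps({"id": req_id, "method": "new_stream",
                        "params": {"file_name": file_name,
                                   "file_size": sum(len(c) for c in chunks)}})]
    for chunk in chunks:
        msgs.append(json.dumps({"method": "stream_chunk",
                                "params": {"stream_id": stream_id,
                                           "chunk_data": chunk}}))
    # empty chunk_data maps to the server calling efka_stream:finish/1
    msgs.append(json.dumps({"method": "stream_chunk",
                            "params": {"stream_id": stream_id,
                                       "chunk_data": ""}}))
    return msgs

msgs = stream_messages(5, 1, "app.tar.gz", ["abc", "def"])
```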
apps/efka/src/docker/docker_commands.erl (new file, 466 lines)
@ -0,0 +1,466 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 15. Sep 2025 16:11
%%%-------------------------------------------------------------------
-module(docker_commands).
-author("anlicheng").

%% API
-export([pull_image/2, check_image_exist/1]).
-export([create_container/3, check_container_exist/1, is_container_running/1,
    start_container/1, stop_container/1, remove_container/1, kill_container/1,
    get_containers/0]).

-spec pull_image(Image :: binary(), Callback :: fun((Msg :: any()) -> no_return())) -> ok | {error, Reason :: any()}.
pull_image(Image, Callback) when is_binary(Image), is_function(Callback, 1) ->
    Url = lists:flatten(io_lib:format("/images/create?fromImage=~s", [binary_to_list(Image)])),
    docker_http:stream_request(Callback, "POST", Url, <<>>, []).

-spec check_image_exist(Image :: binary()) -> boolean().
check_image_exist(Image) when is_binary(Image) ->
    Url = lists:flatten(io_lib:format("/images/~s/json", [binary_to_list(Image)])),
    case docker_http:request("GET", Url, <<"">>, []) of
        {ok, 200, _Headers, Resp} ->
            M = catch jiffy:decode(Resp, [return_maps]),
            is_map(M) andalso maps:is_key(<<"Id">>, M);
        _ ->
            false
    end.

-spec create_container(ContainerName :: binary(), ContainerDir :: string(), Config :: map()) -> {ok, ContainerId :: binary()} | {error, Reason :: any()}.
create_container(ContainerName, ContainerDir, Config) when is_binary(ContainerName), is_list(ContainerDir), is_map(Config) ->
    Url = lists:flatten(io_lib:format("/containers/create?name=~s", [binary_to_list(ContainerName)])),
    %% Mount the reserved directory that stores the configuration file
    ConfigFile = list_to_binary(docker_helper:get_config_file(ContainerDir)),

    %% Add the custom directory used to hold configuration files
    Volumes0 = maps:get(<<"volumes">>, Config, []),
    Volumes = [<<ConfigFile/binary, ":/usr/local/etc/service.conf">>|Volumes0],
    NewConfig = Config#{<<"volumes">> => Volumes},

    Options = build_options(ContainerName, NewConfig),
    display_options(Options),

    Body = iolist_to_binary(jiffy:encode(Options, [force_utf8])),
    true = is_binary(Body),

    Headers = [
        {<<"Content-Type">>, <<"application/json">>}
    ],
    case docker_http:request("POST", Url, Body, Headers) of
        {ok, 201, _Headers, Resp} ->
            case catch jiffy:decode(Resp, [return_maps]) of
                #{<<"Id">> := ContainerId} when is_binary(ContainerId) ->
                    {ok, ContainerId};
                _ ->
                    {error, Resp}
            end;
        {ok, _StatusCode, _, ErrorResp} ->
            case catch jiffy:decode(ErrorResp, [return_maps]) of
                #{<<"message">> := Msg} ->
                    {error, Msg};
                _ ->
                    {error, ErrorResp}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

-spec is_container_running(ContainerId :: binary()) -> boolean().
is_container_running(ContainerId) when is_binary(ContainerId) ->
    case inspect_container(ContainerId) of
        {ok, #{<<"State">> := #{<<"Running">> := Running}}} ->
            Running;
        {error, _} ->
            false
    end.

-spec check_container_exist(ContainerName :: binary()) -> boolean().
check_container_exist(ContainerName) when is_binary(ContainerName) ->
    case inspect_container(ContainerName) of
        {ok, #{<<"Id">> := Id}} when is_binary(Id) ->
            true;
        _ ->
            false
    end.

-spec start_container(ContainerName :: binary()) -> ok | {error, Reason :: binary()}.
start_container(ContainerName) when is_binary(ContainerName) ->
    Url = lists:flatten(io_lib:format("/containers/~s/start", [binary_to_list(ContainerName)])),
    Headers = [
        {<<"Content-Type">>, <<"application/json">>}
    ],
    case docker_http:request("POST", Url, <<>>, Headers) of
        {ok, 204, _Headers, _} ->
            ok;
        {ok, 304, _Headers, _} ->
            {error, <<"container already started">>};
        {ok, _StatusCode, _Header, ErrorResp} ->
            case catch jiffy:decode(ErrorResp, [return_maps]) of
                #{<<"message">> := Msg} ->
                    {error, Msg};
                _ ->
                    {error, ErrorResp}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

-spec stop_container(ContainerName :: binary()) -> ok | {error, Reason :: binary()}.
stop_container(ContainerName) when is_binary(ContainerName) ->
    Url = lists:flatten(io_lib:format("/containers/~s/stop", [binary_to_list(ContainerName)])),
    Headers = [
        {<<"Content-Type">>, <<"application/json">>}
    ],
    case docker_http:request("POST", Url, <<>>, Headers) of
        {ok, 204, _Headers, _} ->
            ok;
        {ok, 304, _Headers, _} ->
            {error, <<"container already stopped">>};
        {ok, _StatusCode, _Header, ErrorResp} ->
            case catch jiffy:decode(ErrorResp, [return_maps]) of
                #{<<"message">> := Msg} ->
                    {error, Msg};
                _ ->
                    {error, ErrorResp}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

-spec kill_container(ContainerName :: binary()) -> ok | {error, Reason :: binary()}.
kill_container(ContainerName) when is_binary(ContainerName) ->
    Url = lists:flatten(io_lib:format("/containers/~s/kill", [binary_to_list(ContainerName)])),
    Headers = [
        {<<"Content-Type">>, <<"application/json">>}
    ],
    case docker_http:request("POST", Url, <<>>, Headers) of
        {ok, 204, _Headers, _} ->
            ok;
        {ok, _StatusCode, _Header, ErrorResp} ->
            case catch jiffy:decode(ErrorResp, [return_maps]) of
                #{<<"message">> := Msg} ->
                    {error, Msg};
                _ ->
                    {error, ErrorResp}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

-spec remove_container(ContainerName :: binary()) -> ok | {error, Reason :: binary()}.
remove_container(ContainerName) when is_binary(ContainerName) ->
    Url = lists:flatten(io_lib:format("/containers/~s", [binary_to_list(ContainerName)])),
    Headers = [
        {<<"Content-Type">>, <<"application/json">>}
    ],
    case docker_http:request("DELETE", Url, <<>>, Headers) of
        {ok, 204, _Headers, _} ->
            ok;
        {ok, 304, _Headers, _} ->
            {error, <<"container already stopped">>};
        {ok, _StatusCode, _Header, ErrorResp} ->
            case catch jiffy:decode(ErrorResp, [return_maps]) of
                #{<<"message">> := Msg} ->
                    {error, Msg};
                _ ->
                    {error, ErrorResp}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

-spec get_containers() -> {ok, Containers :: [map()]} | {error, Reason :: binary()}.
get_containers() ->
    Url = "/containers/json?all=true",
    Headers = [
        {<<"Content-Type">>, <<"application/json">>}
    ],
    case docker_http:request("GET", Url, <<>>, Headers) of
        {ok, 200, _Headers, ContainersBin} ->
            Containers = jiffy:decode(ContainersBin, [return_maps]),
            {ok, Containers};
        {ok, _StatusCode, _Header, ErrorResp} ->
            case catch jiffy:decode(ErrorResp, [return_maps]) of
                #{<<"message">> := Msg} ->
                    {error, Msg};
                _ ->
                    {error, ErrorResp}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

-spec inspect_container(ContainerId :: binary()) -> {ok, Json :: map()} | {error, Error :: any()}.
inspect_container(ContainerId) when is_binary(ContainerId) ->
    Url = lists:flatten(io_lib:format("/containers/~s/json", [binary_to_list(ContainerId)])),
    Headers = [
        {<<"Content-Type">>, <<"application/json">>}
    ],
    case docker_http:request("GET", Url, <<>>, Headers) of
        {ok, 200, _Headers, Resp} ->
            Json = jiffy:decode(Resp, [return_maps]),
            {ok, Json};
        {ok, _StatusCode, _Header, ErrorResp} ->
            case catch jiffy:decode(ErrorResp, [return_maps]) of
                #{<<"message">> := Msg} ->
                    {error, Msg};
                _ ->
                    {error, ErrorResp}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% helper methods
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Build the final JSON map for container creation
build_options(ContainerName, Config) when is_binary(ContainerName), is_map(Config) ->
    %% Inject the container name into the runtime environment as an env variable
    Envs0 = maps:get(<<"envs">>, Config, []),
    Envs = [<<"CONTAINER_NAME=", ContainerName/binary>>|Envs0],
    #{
        <<"Image">> => maps:get(<<"image">>, Config, <<>>),
        <<"Cmd">> => maps:get(<<"command">>, Config, []),
        <<"Entrypoint">> => maps:get(<<"entrypoint">>, Config, []),
        <<"Env">> => Envs,
        <<"Labels">> => maps:get(<<"labels">>, Config, #{}),
        <<"Volumes">> => build_volumes(Config),
        <<"User">> => maps:get(<<"user">>, Config, <<>>),
        <<"WorkingDir">> => maps:get(<<"working_dir">>, Config, <<>>),
        <<"Hostname">> => maps:get(<<"hostname">>, Config, <<>>),
        <<"ExposedPorts">> => build_expose(Config),
        <<"NetworkingConfig">> => build_networks(Config),
        <<"Healthcheck">> => build_healthcheck(Config),
        <<"HostConfig">> => fold_merge([
            build_network_mode(Config),
            build_binds(Config),
            build_restart(Config),
            build_privileged(Config),
            build_cap_add_drop(Config),
            build_devices(Config),
            build_memory(Config),
            build_cpu(Config),
            build_ulimits(Config),
            build_tmpfs(Config),
            build_sysctls(Config),
            build_extra_hosts(Config)
        ])
    }.

%% Utility: merge a list of maps (keys are expected to be disjoint)
fold_merge(List) ->
    lists:foldl(fun maps:merge/2, #{}, List).

%% --- Build sub-fields ---
build_expose(Config) ->
    Ports = maps:get(<<"expose">>, Config, []),
    case Ports of
        [] -> #{};
        _ ->
            maps:from_list([{<<P/binary, "/tcp">>, #{}} || P <- Ports])
    end.

build_volumes(Config) ->
    Vols = maps:get(<<"volumes">>, Config, []),
    case Vols of
        [] ->
            #{};
        _ ->
            maps:from_list(lists:map(fun(V) ->
                [_Host, Cont] = binary:split(V, <<":">>, []),
                {Cont, #{}}
            end, Vols))
    end.

build_binds(Config) ->
    Vols = maps:get(<<"volumes">>, Config, []),
    case Vols of
        [] ->
            #{};
        _ ->
            #{<<"Binds">> => Vols}
    end.

build_networks(Config) ->
    Nets = maps:get(<<"networks">>, Config, []),
    case Nets of
        [] -> #{};
        _ ->
            NetCfg = maps:from_list([{N, #{}} || N <- Nets]),
            #{<<"EndpointsConfig">> => NetCfg}
    end.

build_network_mode(Config) ->
    NetworkMode = maps:get(<<"network_mode">>, Config, <<"bridge">>),
    #{<<"NetworkMode">> => NetworkMode}.

parse_mem(Val) ->
    case binary:last(Val) of
        $m ->
            N = binary:part(Val, {0, byte_size(Val)-1}),
            list_to_integer(binary_to_list(N)) * 1024 * 1024;
        $g ->
            N = binary:part(Val, {0, byte_size(Val)-1}),
            list_to_integer(binary_to_list(N)) * 1024 * 1024 * 1024;
        _ ->
            list_to_integer(binary_to_list(Val))
    end.

build_healthcheck(Config) ->
    HC = maps:get(<<"healthcheck">>, Config, #{}),
    case maps:size(HC) of
        0 ->
            #{};
        _ ->
            #{
                <<"Test">> => maps:get(<<"test">>, HC, []),
                <<"Interval">> => parse_duration(maps:get(<<"interval">>, HC, <<"0s">>)),
                <<"Timeout">> => parse_duration(maps:get(<<"timeout">>, HC, <<"0s">>)),
                <<"Retries">> => maps:get(<<"retries">>, HC, 0)
            }
    end.

parse_duration(Bin) ->
    %% "30s" -> 30000000000 (the Engine API expects nanoseconds)
    Sz = byte_size(Bin),
    NBin = binary:part(Bin, {0, Sz-1}),
    case binary:last(Bin) of
        $s ->
            list_to_integer(binary_to_list(NBin)) * 1000000000;
        $m ->
            list_to_integer(binary_to_list(NBin)) * 60000000000;
        _ ->
            %% no recognized unit suffix: treat the whole value as a number
            list_to_integer(binary_to_list(Bin))
    end.
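
The two unit-parsing helpers above follow simple suffix rules: `parse_mem` maps `m`/`g` suffixes to bytes, and `parse_duration` maps `s`/`m` suffixes to the nanosecond values the Docker Engine API expects in `Healthcheck.Interval`/`Timeout`. A small Python sketch of the same rules (an illustrative mirror, not part of the patch):

```python
def parse_mem(val: str) -> int:
    # "512m" -> 536870912 bytes, "1g" -> 1073741824 bytes, "1024" -> 1024
    if val.endswith("m"):
        return int(val[:-1]) * 1024 * 1024
    if val.endswith("g"):
        return int(val[:-1]) * 1024 * 1024 * 1024
    return int(val)


def parse_duration(val: str) -> int:
    # "30s" -> 30000000000 ns, "2m" -> 120000000000 ns
    if val.endswith("s"):
        return int(val[:-1]) * 1_000_000_000
    if val.endswith("m"):
        return int(val[:-1]) * 60_000_000_000
    return int(val)  # no recognized unit: treat as a plain number
```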

%% --- Build sub-fields ---

build_restart(Config) ->
    case maps:get(<<"restart">>, Config, undefined) of
        undefined ->
            #{};
        Policy ->
            #{<<"RestartPolicy">> => #{<<"Name">> => Policy}}
    end.

build_privileged(Config) ->
    case maps:get(<<"privileged">>, Config, false) of
        true ->
            #{<<"Privileged">> => true};
        _ ->
            #{}
    end.

build_cap_add_drop(Config) ->
    Add = maps:get(<<"cap_add">>, Config, []),
    Drop = maps:get(<<"cap_drop">>, Config, []),
    case {Add, Drop} of
        {[], []} ->
            #{};
        _ ->
            #{<<"CapAdd">> => Add, <<"CapDrop">> => Drop}
    end.

build_devices(Config) ->
    Devs = maps:get(<<"devices">>, Config, []),
    case Devs of
        [] ->
            #{};
        _ ->
            DevObjs = [#{<<"PathOnHost">> => H, <<"PathInContainer">> => C,
                         <<"CgroupPermissions">> => <<"rwm">>}
                || D <- Devs,
                   [H, C] <- [binary:split(D, <<":">>, [])]],
            #{<<"Devices">> => DevObjs}
    end.

build_memory(Config) ->
    Mem = maps:get(<<"mem_limit">>, Config, undefined),
    MemRes = maps:get(<<"mem_reservation">>, Config, undefined),
    HCfg = #{},
    HCfg1 = if
        Mem /= undefined ->
            maps:put(<<"Memory">>, parse_mem(Mem), HCfg);
        true ->
            HCfg
    end,
    if
        MemRes /= undefined ->
            maps:put(<<"MemoryReservation">>, parse_mem(MemRes), HCfg1);
        true ->
            HCfg1
    end.

build_cpu(Config) ->
    CPU = maps:get(<<"cpus">>, Config, undefined),
    Shares = maps:get(<<"cpu_shares">>, Config, undefined),
    HCfg = #{},
    HCfg1 = if
        CPU /= undefined ->
            maps:put(<<"NanoCpus">>, trunc(CPU * 1000000000), HCfg);
        true ->
            HCfg
    end,
    if
        Shares /= undefined ->
            maps:put(<<"CpuShares">>, Shares, HCfg1);
        true ->
            HCfg1
    end.

build_ulimits(Config) ->
    UL = maps:get(<<"ulimits">>, Config, #{}),
    case maps:size(UL) of
        0 ->
            #{};
        _ ->
            ULList = lists:map(fun({K, V}) ->
                %% each value is a <<"soft:hard">> pair
                [S1, H1] = binary:split(V, <<":">>, []),
                S = list_to_integer(binary_to_list(S1)),
                H = list_to_integer(binary_to_list(H1)),
                #{<<"Name">> => K, <<"Soft">> => S, <<"Hard">> => H}
            end, maps:to_list(UL)),

            #{<<"Ulimits">> => ULList}
    end.
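
`build_ulimits` above turns each `{Name, <<"soft:hard">>}` entry into the `Ulimits` object shape the Engine API accepts. The transformation can be sketched in Python (a hypothetical mirror for illustration only):

```python
def build_ulimits(ulimits: dict) -> list:
    # {"nofile": "1024:2048"} -> [{"Name": "nofile", "Soft": 1024, "Hard": 2048}]
    out = []
    for name, value in ulimits.items():
        soft, hard = value.split(":")
        out.append({"Name": name, "Soft": int(soft), "Hard": int(hard)})
    return out
```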

build_sysctls(Config) ->
    SC = maps:get(<<"sysctls">>, Config, #{}),
    case maps:size(SC) of
        0 ->
            #{};
        _ ->
            #{<<"Sysctls">> => SC}
    end.

build_tmpfs(Config) ->
    Tmp = maps:get(<<"tmpfs">>, Config, []),
    case Tmp of
        [] ->
            #{};
        _ ->
            #{<<"Tmpfs">> => maps:from_list([{T, <<>>} || T <- Tmp])}
    end.

build_extra_hosts(Config) ->
    Hosts = maps:get(<<"extra_hosts">>, Config, []),
    case Hosts of
        [] ->
            #{};
        _ ->
            #{<<"ExtraHosts">> => Hosts}
    end.

-spec display_options(Options :: map()) -> no_return().
display_options(Options) when is_map(Options) ->
    lager:debug("deploy options: ~p", [jiffy:encode(Options, [force_utf8])]),
    lists:foreach(fun({K, V}) -> lager:debug("~p => ~p", [K, V]) end, maps:to_list(Options)).
apps/efka/src/docker/docker_deployer.erl (new file, 117 lines)
@@ -0,0 +1,117 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 07. May 2025 15:47
%%%-------------------------------------------------------------------
-module(docker_deployer).
-author("anlicheng").
-dialyzer([{nowarn_function, normalize_image/1}]).

%% API
-export([start_monitor/3]).
-export([deploy/3]).

-define(TASK_SUCCESS, <<"success">>).
-define(TASK_FAIL, <<"fail">>).

%%%===================================================================
%%% API
%%%===================================================================

%% @doc Spawns the deploy process and monitors it
-spec(start_monitor(TaskId :: integer(), ContainerDir :: string(), Config :: map()) -> {ok, {Pid :: pid(), MRef :: reference()}}).
start_monitor(TaskId, ContainerDir, Config) when is_integer(TaskId), is_list(ContainerDir), is_map(Config) ->
    {ok, spawn_monitor(?MODULE, deploy, [TaskId, ContainerDir, Config])}.

%%%===================================================================
%%% Internal functions
%%%===================================================================

%{
%  "image": "nginx:latest",
%  "container_name": "my_nginx",
%  "ports": ["8080:80", "443:443"],
%  "volumes": ["/host/data:/data", "/host/log:/var/log"],
%  "envs": ["ENV1=val1", "ENV2=val2"],
%  "entrypoint": ["/docker-entrypoint.sh"],
%  "command": ["nginx", "-g", "daemon off;"],
%  "restart": "always"
%}
-spec deploy(TaskId :: integer(), ContainerDir :: string(), Config :: map()) -> no_return().
deploy(TaskId, ContainerDir, Config) when is_integer(TaskId), is_list(ContainerDir), is_map(Config) ->
    %% Try to pull the image first
    ContainerName = maps:get(<<"container_name">>, Config),
    trace_log(TaskId, <<"info">>, <<"开始部署容器:"/utf8, ContainerName/binary>>),

    case docker_commands:check_container_exist(ContainerName) of
        true ->
            trace_log(TaskId, <<"info">>, <<"本地容器已经存在:"/utf8, ContainerName/binary>>),
            efka_remote_agent:close_task_event_stream(TaskId, ?TASK_FAIL);
        false ->
            Image0 = maps:get(<<"image">>, Config),
            Image = normalize_image(Image0),

            trace_log(TaskId, <<"info">>, <<"使用镜像:"/utf8, Image/binary>>),
            PullResult = case docker_commands:check_image_exist(Image) of
                true ->
                    trace_log(TaskId, <<"info">>, <<"镜像本地已存在:"/utf8, Image/binary>>),
                    ok;
                false ->
                    trace_log(TaskId, <<"info">>, <<"开始拉取镜像:"/utf8, Image/binary>>),
                    CB = fun
                        ({message, M}) ->
                            trace_log(TaskId, <<"info">>, M);
                        ({error, Error}) ->
                            trace_log(TaskId, <<"error">>, Error)
                    end,
                    docker_commands:pull_image(Image, CB)
            end,

            case PullResult of
                ok ->
                    trace_log(TaskId, <<"info">>, <<"开始创建容器: "/utf8, ContainerName/binary>>),
                    case docker_commands:create_container(ContainerName, ContainerDir, Config) of
                        {ok, ContainerId} ->
                            %% Create the configuration file that belongs to the container
                            ConfigFile = docker_helper:get_config_file(ContainerDir),
                            case file:open(ConfigFile, [write, exclusive]) of
                                {ok, FD} ->
                                    ok = file:write(FD, <<>>),
                                    file:close(FD);
                                {error, Reason} ->
                                    Reason1 = list_to_binary(io_lib:format("~p", [Reason])),
                                    trace_log(TaskId, <<"notice">>, <<"创建配置文件失败: "/utf8, Reason1/binary>>)
                            end,
                            %% the short id is the first 12 characters of the full id
                            ShortContainerId = binary:part(ContainerId, 0, 12),
                            trace_log(TaskId, <<"info">>, <<"容器创建成功: "/utf8, ShortContainerId/binary>>),
                            trace_log(TaskId, <<"info">>, <<"任务完成"/utf8>>),
                            efka_remote_agent:close_task_event_stream(TaskId, ?TASK_SUCCESS);
                        {error, Reason} ->
                            trace_log(TaskId, <<"error">>, <<"容器创建失败: "/utf8, Reason/binary>>),
                            trace_log(TaskId, <<"error">>, <<"任务失败"/utf8>>),
                            efka_remote_agent:close_task_event_stream(TaskId, ?TASK_FAIL)
                    end;
                {error, Reason} ->
                    trace_log(TaskId, <<"error">>, <<"镜像拉取失败: "/utf8, Reason/binary>>),
                    efka_remote_agent:close_task_event_stream(TaskId, ?TASK_FAIL)
            end
    end.

-spec normalize_image(binary()) -> binary().
normalize_image(Image) when is_binary(Image) ->
    Parts = binary:split(Image, <<"/">>, [global]),
    {PrefixParts, [Last]} = lists:split(length(Parts) - 1, Parts),
    NormalizedLast = case binary:split(Last, <<":">>, [global]) of
        [_Name] -> <<Last/binary, ":latest">>;
        [_Name, _Tag] -> Last
    end,
    iolist_to_binary(lists:join(<<"/">>, PrefixParts ++ [NormalizedLast])).
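
`normalize_image` splits on `/` before checking for a `:` so that a registry host carrying a port (e.g. `registry:5000/app`) is not mistaken for a tag; only an untagged last segment gets `:latest` appended. The same rule in Python (an illustrative mirror, not part of the patch):

```python
def normalize_image(image: str) -> str:
    # Only the last path segment can carry a tag; a ":" in the
    # registry host (e.g. "registry:5000/app") must be ignored.
    prefix, _, last = image.rpartition("/")
    if ":" not in last:
        last += ":latest"
    return f"{prefix}/{last}" if prefix else last
```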

-spec trace_log(TaskId :: integer(), Level :: binary(), Msg :: binary()) -> no_return().
trace_log(TaskId, Level, Msg) when is_integer(TaskId), is_binary(Level), is_binary(Msg) ->
    efka_remote_agent:task_event_stream(TaskId, Level, Msg),
    Info = iolist_to_binary([<<"task_id=">>, integer_to_binary(TaskId), <<" ">>, Level, <<" ">>, Msg]),
    efka_logger:write(Info).
@ -4,16 +4,16 @@
|
|||||||
%%% @doc
|
%%% @doc
|
||||||
%%%
|
%%%
|
||||||
%%% @end
|
%%% @end
|
||||||
%%% Created : 09. 5月 2025 16:45
|
%%% Created : 16. 9月 2025 16:48
|
||||||
%%%-------------------------------------------------------------------
|
%%%-------------------------------------------------------------------
|
||||||
-module(efka_inetd_task_log).
|
-module(docker_events).
|
||||||
-author("anlicheng").
|
-author("anlicheng").
|
||||||
|
|
||||||
-behaviour(gen_server).
|
-behaviour(gen_server).
|
||||||
|
|
||||||
%% API
|
%% API
|
||||||
-export([start_link/0]).
|
-export([start_link/0]).
|
||||||
-export([stash/2, flush/1, get_logs/1]).
|
-export([monitor_container/2]).
|
||||||
|
|
||||||
%% gen_server callbacks
|
%% gen_server callbacks
|
||||||
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
|
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
|
||||||
@ -21,30 +21,18 @@
|
|||||||
-define(SERVER, ?MODULE).
|
-define(SERVER, ?MODULE).
|
||||||
|
|
||||||
-record(state, {
|
-record(state, {
|
||||||
%% #{task_id => queue:new()}
|
port,
|
||||||
pending_map = #{}
|
%% 观察者
|
||||||
|
monitors = #{}
|
||||||
}).
|
}).
|
||||||
|
|
||||||
%%%===================================================================
|
%%%===================================================================
|
||||||
%%% API
|
%%% API
|
||||||
%%%===================================================================
|
%%%===================================================================
|
||||||
|
|
||||||
-spec stash(TaskId :: integer(), Items :: binary() | [binary()]) -> no_return().
|
-spec monitor_container(ReceiverPid :: pid(), ContainerId :: binary()) -> no_return().
|
||||||
stash(TaskId, Log) when is_integer(TaskId), is_binary(Log) ->
|
monitor_container(ReceiverPid, ContainerId) when is_pid(ReceiverPid), is_binary(ContainerId) ->
|
||||||
stash(TaskId, [Log]);
|
gen_server:cast(?SERVER, {monitor_container, ReceiverPid, ContainerId}).
|
||||||
stash(TaskId, Items) when is_integer(TaskId), is_list(Items) ->
|
|
||||||
{{Y, M, D}, {H, I, S}} = calendar:local_time(),
|
|
||||||
TimePrefix = iolist_to_binary(io_lib:format("[~b-~2..0b-~2..0b ~2..0b:~2..0b:~2..0b]", [Y, M, D, H, I, S])),
|
|
||||||
Log = iolist_to_binary([TimePrefix, <<" ">>, lists:join(<<" ">>, Items)]),
|
|
||||||
gen_server:cast(?SERVER, {stash, TaskId, Log}).
|
|
||||||
|
|
||||||
-spec flush(TaskId :: integer()) -> no_return().
|
|
||||||
flush(TaskId) when is_integer(TaskId) ->
|
|
||||||
gen_server:cast(?SERVER, {flush, TaskId}).
|
|
||||||
|
|
||||||
-spec get_logs(TaskId :: integer()) -> {ok, Logs :: list()}.
|
|
||||||
get_logs(TaskId) when is_integer(TaskId) ->
|
|
||||||
gen_server:call(?SERVER, {get_logs, TaskId}).
|
|
||||||
|
|
||||||
%% @doc Spawns the server and registers the local name (unique)
|
%% @doc Spawns the server and registers the local name (unique)
|
||||||
-spec(start_link() ->
|
-spec(start_link() ->
|
||||||
@ -62,6 +50,8 @@ start_link() ->
|
|||||||
{ok, State :: #state{}} | {ok, State :: #state{}, timeout() | hibernate} |
|
{ok, State :: #state{}} | {ok, State :: #state{}, timeout() | hibernate} |
|
||||||
{stop, Reason :: term()} | ignore).
|
{stop, Reason :: term()} | ignore).
|
||||||
init([]) ->
|
init([]) ->
|
||||||
|
process_flag(trap_exit, true),
|
||||||
|
try_attach_events(0),
|
||||||
{ok, #state{}}.
|
{ok, #state{}}.
|
||||||
|
|
||||||
%% @private
|
%% @private
|
||||||
@ -74,15 +64,8 @@ init([]) ->
|
|||||||
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
||||||
{stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
|
{stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
|
||||||
{stop, Reason :: term(), NewState :: #state{}}).
|
{stop, Reason :: term(), NewState :: #state{}}).
|
||||||
handle_call({get_logs, TaskId}, _From, State = #state{pending_map = PendingMap}) ->
|
handle_call(_Request, _From, State = #state{}) ->
|
||||||
case maps:find(TaskId, PendingMap) of
|
{reply, ok, State}.
|
||||||
error ->
|
|
||||||
Logs = task_log_model:get_logs(TaskId),
|
|
||||||
{reply, {ok, Logs}, State};
|
|
||||||
{ok, Q} ->
|
|
||||||
Logs = queue:to_list(Q),
|
|
||||||
{reply, {ok, Logs}, State}
|
|
||||||
end.
|
|
||||||
|
|
||||||
%% @private
|
%% @private
|
||||||
%% @doc Handling cast messages
|
%% @doc Handling cast messages
|
||||||
@ -90,19 +73,9 @@ handle_call({get_logs, TaskId}, _From, State = #state{pending_map = PendingMap})
|
|||||||
{noreply, NewState :: #state{}} |
|
{noreply, NewState :: #state{}} |
|
||||||
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
||||||
{stop, Reason :: term(), NewState :: #state{}}).
|
{stop, Reason :: term(), NewState :: #state{}}).
|
||||||
handle_cast({stash, TaskId, Log}, State = #state{pending_map = PendingMap}) ->
|
handle_cast({monitor_container, ReceiverPid, ContainerId}, State = #state{monitors = Monitors}) ->
|
||||||
Q = maps:get(TaskId, PendingMap, queue:new()),
|
MRef = erlang:monitor(process, ReceiverPid),
|
||||||
NQ = queue:in(Log, Q),
|
{noreply, State#state{monitors = maps:put(ContainerId, {ReceiverPid, MRef}, Monitors)}}.
|
||||||
{noreply, State#state{pending_map = maps:put(TaskId, NQ, PendingMap)}};
|
|
||||||
handle_cast({flush, TaskId}, State = #state{pending_map = PendingMap}) ->
|
|
||||||
case maps:take(TaskId, PendingMap) of
|
|
||||||
error ->
|
|
||||||
{noreply, State};
|
|
||||||
{Q, NPendingMap} ->
|
|
||||||
Logs = queue:to_list(Q),
|
|
||||||
ok = task_log_model:insert(TaskId, Logs),
|
|
||||||
{noreply, State#state{pending_map = NPendingMap}}
|
|
||||||
end.
|
|
||||||
|
|
||||||
%% @private
|
%% @private
|
||||||
%% @doc Handling all non call/cast messages
|
%% @doc Handling all non call/cast messages
|
||||||
@ -110,8 +83,31 @@ handle_cast({flush, TaskId}, State = #state{pending_map = PendingMap}) ->
|
|||||||
{noreply, NewState :: #state{}} |
|
{noreply, NewState :: #state{}} |
|
||||||
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
||||||
{stop, Reason :: term(), NewState :: #state{}}).
|
{stop, Reason :: term(), NewState :: #state{}}).
|
||||||
handle_info(_Info, State = #state{}) ->
|
handle_info({timeout, _, attach_docker_events}, State = #state{port = undefined}) ->
|
||||||
{noreply, State}.
|
ExecCmd = "docker events --format \"{{json .}}\"",
|
||||||
|
case catch erlang:open_port({spawn, ExecCmd}, [exit_status, {line, 10239}, use_stdio, stderr_to_stdout, binary]) of
|
||||||
|
Port when is_port(Port) ->
|
||||||
|
{noreply, State#state{port = Port}};
|
||||||
|
_Error ->
|
||||||
|
try_attach_events(5000),
|
||||||
|
{noreply, State}
|
end;

handle_info({Port, {data, {eol, BinLine}}}, State = #state{port = Port}) ->
    Event = catch jiffy:decode(BinLine, [return_maps]),
    lager:debug("event: ~p", [Event]),
    handle_event(Event, State),
    {noreply, State};

%% Drop the managed Pid from the monitor map when the process exits
handle_info({'DOWN', MRef, process, _Pid, _Reason}, State = #state{monitors = Monitors}) ->
    NMonitors = maps:filter(fun(_Key, {_, Ref}) -> MRef =/= Ref end, Monitors),
    {noreply, State#state{monitors = NMonitors}};

%% When the port exits, try to re-attach to the event stream
handle_info({'EXIT', Port, Reason}, State = #state{port = Port}) ->
    lager:warning("[efka_docker_events] exit with reason: ~p", [Reason]),
    try_attach_events(5000),
    {noreply, State#state{port = undefined}}.

%% @private
%% @doc This function is called by a gen_server when it is about to
@@ -134,3 +130,22 @@ code_change(_OldVsn, State = #state{}, _Extra) ->
%%%===================================================================
%%% Internal functions
%%%===================================================================

handle_event(#{<<"Type">> := <<"container">>, <<"status">> := Status, <<"id">> := Id}, #state{monitors = Monitors}) ->
    case maps:find(Id, Monitors) of
        error ->
            ok;
        {ok, {ReceiverPid, _}} ->
            case Status of
                <<"start">> ->
                    ReceiverPid ! {docker_events, start};
                <<"stop">> ->
                    ReceiverPid ! {docker_events, stop};
                _ ->
                    ok
            end
    end;
handle_event(_, _) ->
    ok.

try_attach_events(Timeout) ->
    erlang:start_timer(Timeout, self(), attach_docker_events).
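The `{Port, {data, {eol, BinLine}}}` clause above consumes line-delimited JSON from an external port. The actual port setup lives outside this hunk; a minimal sketch of how such a port could be opened (the `docker events` invocation and its options are assumptions, not taken from this diff):

```erlang
%% Illustrative sketch only: the real attach logic is elsewhere in this module.
%% With the {line, N} option the port delivers {Port, {data, {eol, Line}}}
%% messages, which is exactly what the handle_info clause above pattern-matches.
open_events_port() ->
    Cmd = "docker events --format '{{json .}}'",
    erlang:open_port({spawn, Cmd}, [binary, {line, 16384}, exit_status]).
```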
36 apps/efka/src/docker/docker_helper.erl Normal file
@@ -0,0 +1,36 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 17. Sep 2025 14:50
%%%-------------------------------------------------------------------
-module(docker_helper).
-author("anlicheng").

%% API
-export([ensure_container_dir/2, get_container_dir/2, get_config_file/1]).

-spec ensure_container_dir(RootDir :: string(), ContainerName :: binary()) -> {ok, ServerRootDir :: string()}.
ensure_container_dir(RootDir, ContainerName) when is_list(RootDir), is_binary(ContainerName) ->
    %% Root directory of this container
    ContainerRootDir = RootDir ++ "/" ++ binary_to_list(ContainerName) ++ "/",
    ok = filelib:ensure_dir(ContainerRootDir),
    {ok, ContainerRootDir}.

-spec get_config_file(ContainerDir :: string()) -> ConfigFile :: string().
get_config_file(ContainerDir) when is_list(ContainerDir) ->
    %% Config file under the container directory
    ContainerDir ++ "service.conf".

-spec get_container_dir(RootDir :: string(), ContainerName :: binary()) -> {ok, ServerRootDir :: string()} | error.
get_container_dir(RootDir, ContainerName) when is_list(RootDir), is_binary(ContainerName) ->
    %% Root directory of this container
    ContainerRootDir = RootDir ++ "/" ++ binary_to_list(ContainerName) ++ "/",
    case filelib:is_dir(ContainerRootDir) of
        true ->
            {ok, ContainerRootDir};
        false ->
            error
    end.
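Note that `ensure_container_dir/2` returns the directory with a trailing slash, which is what lets `get_config_file/1` simply append the file name. A hypothetical caller (the `"/opt/efka"` root and `<<"web">>` container name are made-up examples):

```erlang
%% Sketch of the intended docker_helper call sequence; paths are examples.
demo() ->
    {ok, Dir} = docker_helper:ensure_container_dir("/opt/efka", <<"web">>),
    %% Dir is "/opt/efka/web/", so ConfigFile is "/opt/efka/web/service.conf"
    ConfigFile = docker_helper:get_config_file(Dir),
    ok = file:write_file(ConfigFile, <<"{}">>).
```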
85 apps/efka/src/docker/docker_http.erl Normal file
@@ -0,0 +1,85 @@
%%% docker_http.erl
-module(docker_http).
-export([request/4, stream_request/5]).

%% Call the Docker API over the Unix socket
-spec request(Method :: string(), Path :: string(), Body :: binary(), Headers :: list()) ->
    {ok, StatusCode :: integer(), RespHeaders :: proplists:proplist(), RespBody :: binary()} | {error, any()}.
request(Method, Path, Body, Headers) when is_list(Method), is_list(Path), is_binary(Body), is_list(Headers) ->
    SocketPath = "/var/run/docker.sock",
    %% Open the connection via gun:open_unix/2 ({local, Path} under the hood)
    case gun:open_unix(SocketPath, #{}) of
        {ok, ConnPid} ->
            case gun:await_up(ConnPid) of
                {ok, _} ->
                    %% Send the HTTP request
                    StreamRef = gun:request(ConnPid, Method, Path, Headers, Body),
                    receive_response(ConnPid, StreamRef);
                {error, Reason} ->
                    {error, Reason}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

receive_response(ConnPid, StreamRef) ->
    receive
        {gun_response, ConnPid, StreamRef, nofin, Status, Headers} ->
            receive_body(ConnPid, StreamRef, Status, Headers, <<>>);
        {gun_response, ConnPid, StreamRef, fin, Status, Headers} ->
            {ok, Status, Headers, <<>>};
        {gun_down, ConnPid, _, Reason, _} ->
            {error, {http_closed, Reason}}
    after 5000 ->
        {error, timeout}
    end.

receive_body(ConnPid, StreamRef, Status, Headers, Acc) ->
    receive
        {gun_data, ConnPid, StreamRef, fin, Data} ->
            {ok, Status, Headers, <<Acc/binary, Data/binary>>};
        {gun_data, ConnPid, StreamRef, nofin, Data} ->
            NewAcc = <<Acc/binary, Data/binary>>,
            receive_body(ConnPid, StreamRef, Status, Headers, NewAcc)
    after 10000 ->
        {error, timeout}
    end.

%% Streamed call to the Docker API over the Unix socket
-spec stream_request(Callback :: any(), Method :: string(), Path :: string(), Body :: binary(), Headers :: list()) -> ok | {error, Reason :: any()}.
stream_request(Callback, Method, Path, Body, Headers) when is_list(Method), is_list(Path), is_binary(Body), is_list(Headers) ->
    SocketPath = "/var/run/docker.sock",
    case gun:open_unix(SocketPath, #{}) of
        {ok, ConnPid} ->
            case gun:await_up(ConnPid) of
                {ok, _} ->
                    %% Send the HTTP request
                    StreamRef = gun:request(ConnPid, Method, Path, Headers, Body),
                    receive_response(Callback, ConnPid, StreamRef);
                {error, Reason} ->
                    {error, Reason}
            end;
        {error, Reason} ->
            Callback({error, Reason}),
            {error, Reason}
    end.

receive_response(Callback, ConnPid, StreamRef) ->
    receive
        {gun_response, ConnPid, StreamRef, nofin, _Status, _Headers} ->
            receive_body(Callback, ConnPid, StreamRef);
        {gun_down, ConnPid, _, Reason, _} ->
            Callback({error, Reason}),
            {error, Reason}
    after 5000 ->
        Callback({error, <<"request timed out">>}),
        {error, timeout}
    end.

receive_body(Callback, ConnPid, StreamRef) ->
    receive
        {gun_data, ConnPid, StreamRef, fin, Data} ->
            Callback({message, Data}),
            ok;
        {gun_data, ConnPid, StreamRef, nofin, Data} ->
            Callback({message, Data}),
            receive_body(Callback, ConnPid, StreamRef)
    end.
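`request/4` works against any Docker Engine endpoint reachable through the socket. A sketch listing containers over `GET /containers/json` (a documented Engine API path); decoding the body with jiffy is an assumption about how a caller would consume it, not something this module does:

```erlang
%% Sketch: list running containers through /var/run/docker.sock.
list_containers() ->
    case docker_http:request("GET", "/containers/json", <<>>, []) of
        {ok, 200, _Headers, Body} ->
            {ok, jiffy:decode(Body, [return_maps])};
        {ok, Status, _Headers, _Body} ->
            {error, {unexpected_status, Status}};
        {error, Reason} ->
            {error, Reason}
    end.
```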
225 apps/efka/src/docker/docker_manager.erl Normal file
@@ -0,0 +1,225 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%% Microservice daemon
%%% 1. Handles microservice downloads and version management
%%% 2. Directory management, etc.
%%% @end
%%% Created : 19. Apr 2025 14:55
%%%-------------------------------------------------------------------
-module(docker_manager).
-author("anlicheng").

-behaviour(gen_server).

%% API
-export([start_link/0]).
-export([deploy/2, start_container/1, stop_container/1, config_container/2, kill_container/1, remove_container/1]).
-export([get_containers/0]).

%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).

-define(SERVER, ?MODULE).

-record(state, {
    root_dir :: string(),
    %% Maps deploy tasks to their ids: #{TaskPid => TaskId}
    task_map = #{}
}).

%%%===================================================================
%%% API
%%%===================================================================

-spec get_containers() -> {ok, Containers :: [map()]} | {error, Reason :: binary()}.
get_containers() ->
    gen_server:call(?SERVER, get_containers).

-spec deploy(TaskId :: integer(), Config :: map()) -> ok | {error, Reason :: binary()}.
deploy(TaskId, Config) when is_integer(TaskId), is_map(Config) ->
    gen_server:call(?SERVER, {deploy, TaskId, Config}).

-spec config_container(ContainerName :: binary(), Config :: binary()) -> ok | {error, Reason :: binary()}.
config_container(ContainerName, Config) when is_binary(ContainerName), is_binary(Config) ->
    gen_server:call(?SERVER, {config_container, ContainerName, Config}).

-spec start_container(ContainerId :: binary()) -> ok | {error, Reason :: term()}.
start_container(ContainerId) when is_binary(ContainerId) ->
    gen_server:call(?SERVER, {start_container, ContainerId}).

-spec stop_container(ContainerId :: binary()) -> ok | {error, Reason :: term()}.
stop_container(ContainerId) when is_binary(ContainerId) ->
    gen_server:call(?SERVER, {stop_container, ContainerId}).

-spec kill_container(ContainerId :: binary()) -> ok | {error, Reason :: term()}.
kill_container(ContainerId) when is_binary(ContainerId) ->
    gen_server:call(?SERVER, {kill_container, ContainerId}).

-spec remove_container(ContainerId :: binary()) -> ok | {error, Reason :: term()}.
remove_container(ContainerId) when is_binary(ContainerId) ->
    gen_server:call(?SERVER, {remove_container, ContainerId}).

%% @doc Spawns the server and registers the local name (unique)
-spec(start_link() ->
    {ok, Pid :: pid()} | ignore | {error, Reason :: term()}).
start_link() ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, [], []).

%%%===================================================================
%%% gen_server callbacks
%%%===================================================================

%% @private
%% @doc Initializes the server
-spec(init(Args :: term()) ->
    {ok, State :: #state{}} | {ok, State :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term()} | ignore).
init([]) ->
    erlang:process_flag(trap_exit, true),
    {ok, RootDir} = application:get_env(efka, root_dir),
    {ok, #state{root_dir = RootDir}}.

%% @private
%% @doc Handling call messages
-spec(handle_call(Request :: term(), From :: {pid(), Tag :: term()},
    State :: #state{}) ->
    {reply, Reply :: term(), NewState :: #state{}} |
    {reply, Reply :: term(), NewState :: #state{}, timeout() | hibernate} |
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_call({deploy, TaskId, Config = #{<<"container_name">> := ContainerName}}, _From, State = #state{root_dir = RootDir, task_map = TaskMap}) ->
    %% Create the container directory
    {ok, ContainerDir} = docker_helper:ensure_container_dir(RootDir, ContainerName),
    {ok, {TaskPid, _Ref}} = docker_deployer:start_monitor(TaskId, ContainerDir, Config),
    lager:debug("[docker_manager] start deploy task_id: ~p, config: ~p", [TaskId, Config]),
    {reply, ok, State#state{task_map = maps:put(TaskPid, TaskId, TaskMap)}};

%% Handle the container's associated config file
handle_call({config_container, ContainerName, Config}, _From, State = #state{root_dir = RootDir}) ->
    case docker_helper:get_container_dir(RootDir, ContainerName) of
        {ok, ContainerDir} ->
            %% Overwrite the container's config file
            ConfigFile = docker_helper:get_config_file(ContainerDir),
            case file:write_file(ConfigFile, Config, [write, binary]) of
                ok ->
                    lager:warning("[docker_manager] write config file: ~p success", [ConfigFile]),
                    {reply, ok, State};
                {error, Reason} ->
                    lager:warning("[docker_manager] write config file: ~p, get error: ~p", [ConfigFile, Reason]),
                    {reply, {error, <<"write config failed">>}, State}
            end;
        error ->
            {reply, {error, <<"container dir not found">>}, State}
    end;

%% Start the container; a container that is already running must not be restarted
handle_call({start_container, ContainerId}, _From, State) ->
    case docker_commands:start_container(ContainerId) of
        ok ->
            {reply, ok, State};
        {error, Reason} ->
            {reply, {error, Reason}, State}
    end;

%% Stop the container; an explicit stop also updates the status field of the service config
handle_call({stop_container, ContainerId}, _From, State = #state{}) ->
    case docker_commands:stop_container(ContainerId) of
        ok ->
            {reply, ok, State};
        {error, Reason} ->
            {reply, {error, Reason}, State}
    end;

%% Kill the container
handle_call({kill_container, ContainerId}, _From, State = #state{}) ->
    case docker_commands:kill_container(ContainerId) of
        ok ->
            {reply, ok, State};
        {error, Reason} ->
            {reply, {error, Reason}, State}
    end;

%% List the known containers
handle_call(get_containers, _From, State = #state{}) ->
    case docker_commands:get_containers() of
        {ok, Containers} ->
            {reply, {ok, Containers}, State};
        {error, Reason} ->
            {reply, {error, Reason}, State}
    end;

%% Remove the container
handle_call({remove_container, ContainerId}, _From, State = #state{}) ->
    case docker_commands:remove_container(ContainerId) of
        ok ->
            {reply, ok, State};
        {error, Reason} ->
            {reply, {error, Reason}, State}
    end;

handle_call(_Request, _From, State = #state{}) ->
    {reply, ok, State}.

%% @private
%% @doc Handling cast messages
-spec(handle_cast(Request :: term(), State :: #state{}) ->
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_cast(_Request, State = #state{}) ->
    {noreply, State}.

%% @private
%% @doc Handling all non call/cast messages
-spec(handle_info(Info :: timeout() | term(), State :: #state{}) ->
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_info({'DOWN', _Ref, process, TaskPid, Reason}, State = #state{task_map = TaskMap}) ->
    case maps:take(TaskPid, TaskMap) of
        error ->
            {noreply, State};
        {TaskId, NTaskMap} ->
            case Reason of
                normal ->
                    lager:debug("[docker_manager] task_id: ~p, exit normal", [TaskId]),
                    ok;
                Error0 ->
                    Error = iolist_to_binary(io_lib:format("~p", [Error0])),
                    efka_remote_agent:task_event_stream(TaskId, <<"error">>, <<"task failed: ", Error/binary>>),
                    efka_remote_agent:close_task_event_stream(TaskId, <<"task exited">>),
                    lager:notice("[docker_manager] task_id: ~p, exit with error: ~p", [TaskId, Error]),
                    ok
            end,
            {noreply, State#state{task_map = NTaskMap}}
    end;

handle_info(_Info, State = #state{}) ->
    {noreply, State}.

%% @private
%% @doc This function is called by a gen_server when it is about to
%% terminate. It should be the opposite of Module:init/1 and do any
%% necessary cleaning up. When it returns, the gen_server terminates
%% with Reason. The return value is ignored.
-spec(terminate(Reason :: (normal | shutdown | {shutdown, term()} | term()),
    State :: #state{}) -> term()).
terminate(_Reason, _State = #state{}) ->
    ok.

%% @private
%% @doc Convert process state when code is changed
-spec(code_change(OldVsn :: term() | {down, term()}, State :: #state{},
    Extra :: term()) ->
    {ok, NewState :: #state{}} | {error, Reason :: term()}).
code_change(_OldVsn, State = #state{}, _Extra) ->
    {ok, State}.

%%%===================================================================
%%% Internal functions
%%%===================================================================
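Because every operation goes through `gen_server:call/2`, docker_manager serializes all container work through one process. A caller sketch (the task id and any config keys beyond `<<"container_name">>` are made-up examples):

```erlang
%% Sketch of driving docker_manager; 1001 and the config map are examples.
demo_deploy() ->
    Config = #{<<"container_name">> => <<"web">>},  %% other keys omitted
    ok = docker_manager:deploy(1001, Config),
    {ok, _Containers} = docker_manager:get_containers().
```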
@@ -7,19 +7,16 @@
 [
     sync,
     jiffy,
-    %gpb,
-    mnesia,
     parse_trans,
     lager,
+    cowboy,
+    ranch,
     crypto,
+    gun,
+    cowlib,
     inets,
     ssl,
     public_key,
-    %erts,
-    %runtime_tools,
-    %observer,
-
     kernel,
     stdlib
 ]},
@@ -1,435 +0,0 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 21. May 2025 18:38
%%%-------------------------------------------------------------------
-module(efka_agent).
-author("anlicheng").
-include("message_pb.hrl").
-include("efka.hrl").
-include("efka_tables.hrl").

-behaviour(gen_statem).

%% API
-export([start_link/0]).
-export([metric_data/3, event/3, ping/13, request_service_config/2, await_reply/2]).

%% gen_statem callbacks
-export([init/1, handle_event/4, terminate/3, code_change/4, callback_mode/0]).

-define(SERVER, ?MODULE).

%% Agent states; data can only be sent normally in the activated state
-define(STATE_DENIED, denied).
-define(STATE_CONNECTING, connecting).
-define(STATE_AUTH, auth).
%% Cannot push messages to the server, but may still accept some server commands
-define(STATE_RESTRICTED, restricted).
%% Fully activated
-define(STATE_ACTIVATED, activated).

-record(state, {
    transport_pid :: undefined | pid(),
    transport_ref :: undefined | reference(),
    %% Unacknowledged messages pushed by the server: #{Ref => PacketId}
    push_inflight = #{},
    %% Unacknowledged requests we sent: #{Ref => ReceiverPid}
    request_inflight = #{}
}).

%%%===================================================================
%%% API
%%%===================================================================

%% Send metric data
-spec metric_data(ServiceId :: binary(), DeviceUUID :: binary(), LineProtocolData :: binary()) -> no_return().
metric_data(ServiceId, DeviceUUID, LineProtocolData) when is_binary(ServiceId), is_binary(DeviceUUID), is_binary(LineProtocolData) ->
    gen_statem:cast(?SERVER, {metric_data, ServiceId, DeviceUUID, LineProtocolData}).

-spec event(ServiceId :: binary(), EventType :: integer(), Params :: binary()) -> no_return().
event(ServiceId, EventType, Params) when is_binary(ServiceId), is_integer(EventType), is_binary(Params) ->
    gen_statem:cast(?SERVER, {event, ServiceId, EventType, Params}).

ping(AdCode, BootTime, Province, City, EfkaVersion, KernelArch, Ips, CpuCore, CpuLoad, CpuTemperature, Disk, Memory, Interfaces) ->
    gen_statem:cast(?SERVER, {ping, AdCode, BootTime, Province, City, EfkaVersion, KernelArch, Ips, CpuCore, CpuLoad, CpuTemperature, Disk, Memory, Interfaces}).

%% Request a microservice's configuration
-spec request_service_config(ReceiverPid :: pid(), ServiceId :: binary()) -> {ok, Ref :: reference()} | {error, Reason :: term()}.
request_service_config(ReceiverPid, ServiceId) when is_binary(ServiceId) ->
    gen_statem:call(?SERVER, {request_service_config, ReceiverPid, ServiceId}).

%% Wait for the reply to a request
-spec await_reply(Ref :: reference(), Timeout :: timeout()) -> {ok, Reply :: binary()} | {error, timeout}.
await_reply(Ref, Timeout) when is_reference(Ref), is_integer(Timeout) ->
    receive
        {request_reply, Ref, ReplyBin} ->
            {ok, ReplyBin}
    after Timeout ->
        {error, timeout}
    end.

%% @doc Creates a gen_statem process which calls Module:init/1 to
%% initialize. To ensure a synchronized start-up procedure, this
%% function does not return until Module:init/1 has returned.
start_link() ->
    gen_statem:start_link({local, ?SERVER}, ?MODULE, [], []).

%%%===================================================================
%%% gen_statem callbacks
%%%===================================================================

%% @private
%% @doc Whenever a gen_statem is started using gen_statem:start/[3,4] or
%% gen_statem:start_link/[3,4], this function is called by the new
%% process to initialize.
init([]) ->
    erlang:start_timer(0, self(), create_transport),
    {ok, ?STATE_DENIED, #state{}}.

%% @private
%% @doc This function is called by a gen_statem when it needs to find out
%% the callback mode of the callback module.
callback_mode() ->
    handle_event_function.

%% @private
%% @doc If callback_mode is handle_event_function, then whenever a
%% gen_statem receives an event from call/2, cast/2, or as a normal
%% process message, this function is called.
handle_event({call, From}, {request_service_config, ReceiverPid, ServiceId}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid, request_inflight = RequestInflight}) ->
    Ref = efka_transport:request(TransportPid, ?METHOD_REQUEST_SERVICE_CONFIG, ServiceId),
    {keep_state, State#state{request_inflight = maps:put(Ref, ReceiverPid, RequestInflight)}, [{reply, From, {ok, Ref}}]};

handle_event({call, From}, {request_service_config, _ReceiverPid, _ServiceId}, _, State) ->
    {keep_state, State, [{reply, From, {error, <<"transport is not alive">>}}]};

%% Asynchronously send metric data: send directly while connected, otherwise cache in mnesia
handle_event(cast, {metric_data, ServiceId, DeviceUUID, LineProtocolData}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    Packet = message_pb:encode_msg(#data{
        service_id = ServiceId,
        device_uuid = DeviceUUID,
        metric = LineProtocolData
    }),
    efka_transport:send(TransportPid, ?METHOD_DATA, Packet),
    {keep_state, State};

handle_event(cast, {metric_data, ServiceId, DeviceUUID, LineProtocolData}, _, State) ->
    Packet = message_pb:encode_msg(#data{
        service_id = ServiceId,
        device_uuid = DeviceUUID,
        metric = LineProtocolData
    }),
    ok = cache_model:insert(?METHOD_DATA, Packet),
    {keep_state, State};

%% Asynchronously send an event
handle_event(cast, {event, ServiceId, EventType, Params}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    EventPacket = message_pb:encode_msg(#event{
        service_id = ServiceId,
        event_type = EventType,
        params = Params
    }),
    efka_transport:send(TransportPid, ?METHOD_EVENT, EventPacket),
    {keep_state, State};
%% Not activated: cache the event until the connection is back
handle_event(cast, {event, ServiceId, EventType, Params}, _, State) ->
    EventPacket = message_pb:encode_msg(#event{
        service_id = ServiceId,
        event_type = EventType,
        params = Params
    }),
    ok = cache_model:insert(?METHOD_EVENT, EventPacket),
    {keep_state, State};

handle_event(cast, {ping, AdCode, BootTime, Province, City, EfkaVersion, KernelArch, Ips, CpuCore, CpuLoad, CpuTemperature, Disk, Memory, Interfaces}, ?STATE_ACTIVATED,
    State = #state{transport_pid = TransportPid}) ->
    Ping = message_pb:encode_msg(#ping{
        adcode = AdCode,
        boot_time = BootTime,
        province = Province,
        city = City,
        efka_version = EfkaVersion,
        kernel_arch = KernelArch,
        ips = Ips,
        cpu_core = CpuCore,
        cpu_load = CpuLoad,
        cpu_temperature = CpuTemperature,
        disk = Disk,
        memory = Memory,
        interfaces = Interfaces
    }),
    efka_transport:send(TransportPid, ?METHOD_PING, Ping),
    {keep_state, State};

%% Asynchronously establish the connection to the server
handle_event(info, {timeout, _, create_transport}, ?STATE_DENIED, State) ->
    {ok, Props} = application:get_env(efka, tls_server),
    Host = proplists:get_value(host, Props),
    Port = proplists:get_value(port, Props),
    {ok, {TransportPid, TransportRef}} = efka_transport:start_monitor(self(), Host, Port),
    efka_transport:connect(TransportPid),
    {next_state, ?STATE_CONNECTING, State#state{transport_pid = TransportPid, transport_ref = TransportRef}};

handle_event(info, {connect_reply, Reply}, ?STATE_CONNECTING, State = #state{transport_pid = TransportPid}) ->
    case Reply of
        ok ->
            AuthBin = auth_request(),
            efka_transport:auth_request(TransportPid, AuthBin),
            {next_state, ?STATE_AUTH, State};
        {error, Reason} ->
            lager:debug("[efka_agent] connect failed, error: ~p, pid: ~p", [Reason, TransportPid]),
            efka_transport:stop(TransportPid),
            {next_state, ?STATE_DENIED, State#state{transport_pid = undefined}}
    end;

handle_event(info, {auth_reply, Reply}, ?STATE_AUTH, State = #state{transport_pid = TransportPid}) ->
    case Reply of
        {ok, ReplyBin} ->
            #auth_reply{code = Code, message = Message} = message_pb:decode_msg(ReplyBin, auth_reply),
            case Code of
                0 ->
                    lager:debug("[efka_agent] auth success, message: ~p", [Message]),
                    {next_state, ?STATE_ACTIVATED, State, [{next_event, info, flush_cache}]};
                1 ->
                    %% The host has not been authorized in the backend; the agent cannot
                    %% push data to the cloud, but the cloud can still push commands to
                    %% the agent. The socket connection must be kept up.
                    lager:debug("[efka_agent] auth denied, message: ~p", [Message]),
                    {next_state, ?STATE_RESTRICTED, State};
                2 ->
                    %% Other errors: retry after an interval
                    lager:debug("[efka_agent] auth failed, message: ~p", [Message]),
                    efka_transport:stop(TransportPid),
                    {next_state, ?STATE_DENIED, State#state{transport_pid = undefined}};
                _ ->
                    %% Other errors: retry after an interval
                    lager:debug("[efka_agent] auth failed, invalid message"),
                    efka_transport:stop(TransportPid),
                    {next_state, ?STATE_DENIED, State#state{transport_pid = undefined}}
            end;
        {error, Reason} ->
            lager:debug("[efka_agent] auth_request failed, error: ~p", [Reason]),
            efka_transport:stop(TransportPid),
            {next_state, ?STATE_DENIED, State#state{transport_pid = undefined}}
    end;

%% Push cached data to the server
handle_event(info, flush_cache, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    case cache_model:fetch_next() of
        {ok, #cache{id = Id, method = Method, data = Packet}} ->
            efka_transport:send(TransportPid, Method, Packet),
            cache_model:delete(Id),
            {keep_state, State, [{next_event, info, flush_cache}]};
        error ->
            {keep_state, State}
    end;
handle_event(info, flush_cache, _, State) ->
    {keep_state, State};

%% Messages pushed by the cloud server

%% Microservice deployment
handle_event(info, {server_async_call, PacketId, <<?PUSH_DEPLOY:8, DeployBin/binary>>}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    #deploy{task_id = TaskId, service_id = ServiceId, tar_url = TarUrl} = message_pb:decode_msg(DeployBin, deploy),
    %% Brief wait: efka_inetd returns as soon as it has received the message
    Reply = case efka_inetd:deploy(TaskId, ServiceId, TarUrl) of
        ok ->
            #async_call_reply{code = 1, result = <<"ok">>};
        {error, Reason} when is_binary(Reason) ->
            #async_call_reply{code = 0, message = Reason}
    end,
    efka_transport:async_call_reply(TransportPid, PacketId, message_pb:encode_msg(Reply)),
    {keep_state, State};

%% Start a microservice
handle_event(info, {server_async_call, PacketId, <<?PUSH_START_SERVICE:8, ServiceId/binary>>}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    %% Brief wait: efka_inetd returns as soon as it has received the message
    Reply = case efka_inetd:start_service(ServiceId) of
        ok ->
            #async_call_reply{code = 1, result = <<"ok">>};
        {error, Reason} when is_binary(Reason) ->
            #async_call_reply{code = 0, message = Reason}
    end,
    efka_transport:async_call_reply(TransportPid, PacketId, message_pb:encode_msg(Reply)),
    {keep_state, State};

%% Stop a microservice
handle_event(info, {server_async_call, PacketId, <<?PUSH_STOP_SERVICE:8, ServiceId/binary>>}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    %% Brief wait: efka_inetd returns as soon as it has received the message
    Reply = case efka_inetd:stop_service(ServiceId) of
        ok ->
            #async_call_reply{code = 1, result = <<"ok">>};
        {error, Reason} when is_binary(Reason) ->
            #async_call_reply{code = 0, message = Reason}
    end,
    efka_transport:async_call_reply(TransportPid, PacketId, message_pb:encode_msg(Reply)),
    {keep_state, State};

%% config.json configuration push
handle_event(info, {server_async_call, PacketId, <<?PUSH_SERVICE_CONFIG:8, ConfigBin/binary>>}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid, push_inflight = PushInflight}) ->
    #push_service_config{service_id = ServiceId, config_json = ConfigJson, timeout = Timeout} = message_pb:decode_msg(ConfigBin, push_service_config),
    case efka_service:get_pid(ServiceId) of
        undefined ->
            Reply = #async_call_reply{code = 0, message = <<"service not run">>},
            efka_transport:async_call_reply(TransportPid, PacketId, message_pb:encode_msg(Reply)),
            {keep_state, State};
        ServicePid when is_pid(ServicePid) ->
            Ref = make_ref(),
            %% Push the config file to the corresponding microservice
            efka_service:push_config(ServicePid, Ref, ConfigJson),
            %% Handle the timeout case
            erlang:start_timer(Timeout, self(), {request_timeout, Ref}),
            {keep_state, State#state{push_inflight = maps:put(Ref, PacketId, PushInflight)}}
    end;

%% A command that requires a reply
handle_event(info, {server_async_call, PacketId, <<?PUSH_INVOKE:8, InvokeBin/binary>>}, ?STATE_ACTIVATED, State = #state{push_inflight = PushInflight, transport_pid = TransportPid}) ->
    #invoke{service_id = ServiceId, payload = Payload, timeout = Timeout} = message_pb:decode_msg(InvokeBin, invoke),
    %% Dispatch the message to the subscription system
    case efka_service:get_pid(ServiceId) of
        undefined ->
            Reply = #async_call_reply{code = 0, message = <<"micro_service not run">>},
            efka_transport:async_call_reply(TransportPid, PacketId, message_pb:encode_msg(Reply)),
            {keep_state, State};
        ServicePid when is_pid(ServicePid) ->
            Ref = make_ref(),
            efka_service:invoke(ServicePid, Ref, Payload),
            %% Handle the timeout case
            erlang:start_timer(Timeout, self(), {request_timeout, Ref}),
            {keep_state, State#state{push_inflight = maps:put(Ref, PacketId, PushInflight)}}
    end;

%% Handle task_log requests
handle_event(info, {server_async_call, PacketId, <<?PUSH_TASK_LOG:8, TaskLogBin/binary>>}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    #fetch_task_log{task_id = TaskId} = message_pb:decode_msg(TaskLogBin, fetch_task_log),
    lager:debug("[efka_agent] get task_log request: ~p", [TaskId]),
    {ok, Logs} = efka_inetd_task_log:get_logs(TaskId),
    Reply = case length(Logs) > 0 of
        true ->
            Result = iolist_to_binary(jiffy:encode(Logs, [force_utf8])),
            #async_call_reply{code = 1, result = Result};
        false ->
            #async_call_reply{code = 1, result = <<"[]">>}
    end,
    efka_transport:async_call_reply(TransportPid, PacketId, message_pb:encode_msg(Reply)),
    {keep_state, State};

%% Handle commands
handle_event(info, {server_command, ?COMMAND_AUTH, <<Auth:8>>}, StateName, State = #state{transport_pid = TransportPid}) ->
    case {Auth, StateName} of
        {1, ?STATE_ACTIVATED} ->
            {keep_state, State};
        {1, ?STATE_DENIED} ->
            %% Re-activated: authentication must be redone
            AuthRequestBin = auth_request(),
            efka_transport:auth_request(TransportPid, AuthRequestBin),
            {next_state, ?STATE_AUTH, State};
        {0, _} ->
            %% The host is now restricted: it may not send messages, but can
            %% still accept messages pushed by the server
            {next_state, ?STATE_RESTRICTED, State}
    end;
||||||
%% 处理Pub/Sub机制
|
|
||||||
handle_event(info, {server_pub, Topic, Content}, ?STATE_ACTIVATED, State) ->
|
|
||||||
lager:debug("[efka_agent] get pub topic: ~p, content: ~p", [Topic, Content]),
|
|
||||||
%% 消息发送到订阅系统
|
|
||||||
efka_subscription:publish(Topic, Content),
|
|
||||||
{keep_state, State};
|
|
||||||
|
|
||||||
%% 收到来自efka_service的回复
|
|
||||||
handle_event(info, {service_reply, Ref, EmsReply}, ?STATE_ACTIVATED, State = #state{push_inflight = PushInflight, transport_pid = TransportPid}) ->
|
|
||||||
case maps:take(Ref, PushInflight) of
|
|
||||||
error ->
|
|
||||||
{keep_state, State};
|
|
||||||
{PacketId, NPushInflight} ->
|
|
||||||
Reply = case EmsReply of
|
|
||||||
{ok, Result} ->
|
|
||||||
#async_call_reply{code = 1, result = Result};
|
|
||||||
{error, Reason} ->
|
|
||||||
#async_call_reply{code = 0, message = Reason}
|
|
||||||
end,
|
|
||||||
efka_transport:async_call_reply(TransportPid, PacketId, message_pb:encode_msg(Reply)),
|
|
||||||
|
|
||||||
{keep_state, State#state{push_inflight = NPushInflight}}
|
|
||||||
end;
|
|
||||||
|
|
||||||
%% 收到来自服务器端的回复
|
|
||||||
handle_event(info, {server_reply, Ref, ReplyBin}, ?STATE_ACTIVATED, State = #state{request_inflight = RequestInflight}) ->
|
|
||||||
case maps:take(Ref, RequestInflight) of
|
|
||||||
error ->
|
|
||||||
{keep_state, State};
|
|
||||||
{ReceiverPid, NRequestInflight} ->
|
|
||||||
is_process_alive(ReceiverPid) andalso erlang:send(ReceiverPid, {request_reply, Ref, ReplyBin}),
|
|
||||||
{keep_state, State#state{push_inflight = NRequestInflight}}
|
|
||||||
end;
|
|
||||||
|
|
||||||
%% todo 请求超时逻辑处理
|
|
||||||
handle_event(info, {timeout, _, {request_timeout, Ref}}, ?STATE_ACTIVATED, State = #state{push_inflight = PushInflight, transport_pid = TransportPid}) ->
|
|
||||||
case maps:take(Ref, PushInflight) of
|
|
||||||
error ->
|
|
||||||
{keep_state, State};
|
|
||||||
{PacketId, NPushInflight} ->
|
|
||||||
Reply = #async_call_reply{code = 0, message = <<"reqeust timeout">>, result = <<>>},
|
|
||||||
efka_transport:async_call_reply(TransportPid, PacketId, message_pb:encode_msg(Reply)),
|
|
||||||
|
|
||||||
{keep_state, State#state{push_inflight = NPushInflight}}
|
|
||||||
end;
|
|
||||||
|
|
||||||
%% transport进程退出
|
|
||||||
handle_event(info, {'DOWN', MRef, process, TransportPid, Reason}, _, State = #state{transport_ref = MRef}) ->
|
|
||||||
lager:debug("[efka_agent] transport pid: ~p, exit with reason: ~p", [TransportPid, Reason]),
|
|
||||||
erlang:start_timer(5000, self(), create_transport),
|
|
||||||
{next_state, ?STATE_DENIED, State#state{transport_pid = undefined, transport_ref = undefined}}.
|
|
||||||
|
|
||||||
%% @private
|
|
||||||
%% @doc This function is called by a gen_statem when it is about to
|
|
||||||
%% terminate. It should be the opposite of Module:init/1 and do any
|
|
||||||
%% necessary cleaning up. When it returns, the gen_statem terminates with
|
|
||||||
%% Reason. The return value is ignored.
|
|
||||||
terminate(_Reason, _StateName, _State = #state{transport_pid = TransportPid}) ->
|
|
||||||
case is_pid(TransportPid) andalso is_process_alive(TransportPid) of
|
|
||||||
true ->
|
|
||||||
efka_transport:stop(TransportPid);
|
|
||||||
false ->
|
|
||||||
ok
|
|
||||||
end,
|
|
||||||
ok.
|
|
||||||
|
|
||||||
%% @private
|
|
||||||
%% @doc Convert process state when code is changed
|
|
||||||
code_change(_OldVsn, StateName, State = #state{}, _Extra) ->
|
|
||||||
{ok, StateName, State}.
|
|
||||||
|
|
||||||
%%%===================================================================
|
|
||||||
%%% Internal functions
|
|
||||||
%%%===================================================================
|
|
||||||
|
|
||||||
-spec auth_request() -> binary().
|
|
||||||
auth_request() ->
|
|
||||||
{ok, AuthInfo} = application:get_env(efka, auth),
|
|
||||||
UUID = proplists:get_value(uuid, AuthInfo),
|
|
||||||
Username = proplists:get_value(username, AuthInfo),
|
|
||||||
Salt = proplists:get_value(salt, AuthInfo),
|
|
||||||
Token = proplists:get_value(token, AuthInfo),
|
|
||||||
|
|
||||||
message_pb:encode_msg(#auth_request{
|
|
||||||
uuid = unicode:characters_to_binary(UUID),
|
|
||||||
username = unicode:characters_to_binary(Username),
|
|
||||||
salt = unicode:characters_to_binary(Salt),
|
|
||||||
token = unicode:characters_to_binary(Token),
|
|
||||||
timestamp = efka_util:timestamp()
|
|
||||||
}).
|
|
||||||
@@ -11,44 +11,47 @@
 start(_StartType, _StartArgs) ->
     io:setopts([{encoding, unicode}]),
-    %% Start the mnesia database
-    start_mnesia(),
+    ensure_upload_dir(),
     %% Speed up memory reclamation
     erlang:system_flag(fullsweep_after, 16),
+    start_http_server(),
 
     efka_sup:start_link().
 
 stop(_State) ->
     ok.
 
-%% internal functions
+%% Microservices and efka communicate over the websocket protocol
+start_http_server() ->
+    {ok, Props} = application:get_env(efka, http_server),
+    Acceptors = proplists:get_value(acceptors, Props, 50),
+    MaxConnections = proplists:get_value(max_connections, Props, 10240),
+    Backlog = proplists:get_value(backlog, Props, 1024),
+    Port = proplists:get_value(port, Props),
 
-%% Start the in-memory database
-start_mnesia() ->
-    %% Start the database
-    ensure_mnesia_schema(),
-    ok = mnesia:start(),
-    Tables = mnesia:system_info(tables),
-    lager:debug("[efka_app] tables: ~p", [Tables]),
-    %% Create the database tables
-    not lists:member(id_generator, Tables) andalso id_generator_model:create_table(),
-    not lists:member(service, Tables) andalso service_model:create_table(),
-    not lists:member(cache, Tables) andalso cache_model:create_table(),
-    not lists:member(task_log, Tables) andalso task_log_model:create_table(),
-    ok.
+    Dispatcher = cowboy_router:compile([
+        {'_', [
+            {"/ws", ws_channel, []},
+            {"/files/[...]", cowboy_static, {dir, "/usr/local/code/downloads"}},
+            {"/upload", upload_channel, []}
+        ]}
+    ]),
 
--spec ensure_mnesia_schema() -> any().
-ensure_mnesia_schema() ->
-    case mnesia:system_info(use_dir) of
+    TransOpts = [
+        {port, Port},
+        {num_acceptors, Acceptors},
+        {backlog, Backlog},
+        {max_connections, MaxConnections}
+    ],
+    {ok, Pid} = cowboy:start_clear(ws_listener, TransOpts, #{env => #{dispatch => Dispatcher}}),
+
+    lager:debug("[efka_app] websocket server start at: ~p, pid is: ~p", [Port, Pid]).
+
+ensure_upload_dir() ->
+    {ok, UploadDir} = application:get_env(efka, upload_dir),
+    case filelib:is_dir(UploadDir) of
         true ->
-            lager:debug("[efka_app] mnesia schema exists"),
             ok;
         false ->
-            mnesia:stop(),
-            case mnesia:create_schema([node()]) of
-                ok -> ok;
-                {error, {_, {already_exists, _}}} -> ok;
-                Error ->
-                    lager:debug("[iot_app] create mnesia schema failed with error: ~p", [Error]),
-                    throw({init_schema, Error})
-            end
+            ok = file:make_dir(UploadDir)
     end.
@@ -1,212 +0,0 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%% Microservice daemon
%%% 1. Responsible for downloading microservices and managing their versions
%%% 2. Directory management, etc.
%%% @end
%%% Created : 19. Apr 2025 14:55
%%%-------------------------------------------------------------------
-module(efka_inetd).
-author("anlicheng").
-include("efka_tables.hrl").
-include("message_pb.hrl").

-behaviour(gen_server).

%% API
-export([start_link/0]).
-export([deploy/3, start_service/1, stop_service/1]).

%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).

-define(SERVER, ?MODULE).

-record(state, {
    root_dir :: string(),
    %% Mapping from task pid to task info, #{TaskPid => {TaskId, ServiceId}}
    task_map = #{}
}).

%%%===================================================================
%%% API
%%%===================================================================

-spec deploy(TaskId :: integer(), ServiceId :: binary(), TarUrl :: binary()) -> ok | {error, Reason :: binary()}.
deploy(TaskId, ServiceId, TarUrl) when is_integer(TaskId), is_binary(ServiceId), is_binary(TarUrl) ->
    gen_server:call(?SERVER, {deploy, TaskId, ServiceId, TarUrl}).

-spec start_service(ServiceId :: binary()) -> ok | {error, Reason :: term()}.
start_service(ServiceId) when is_binary(ServiceId) ->
    gen_server:call(?SERVER, {start_service, ServiceId}).

-spec stop_service(ServiceId :: binary()) -> ok | {error, Reason :: term()}.
stop_service(ServiceId) when is_binary(ServiceId) ->
    gen_server:call(?SERVER, {stop_service, ServiceId}).

%% @doc Spawns the server and registers the local name (unique)
-spec(start_link() ->
    {ok, Pid :: pid()} | ignore | {error, Reason :: term()}).
start_link() ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, [], []).

%%%===================================================================
%%% gen_server callbacks
%%%===================================================================

%% @private
%% @doc Initializes the server
-spec(init(Args :: term()) ->
    {ok, State :: #state{}} | {ok, State :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term()} | ignore).
init([]) ->
    erlang:process_flag(trap_exit, true),
    {ok, RootDir} = application:get_env(efka, root_dir),
    {ok, #state{root_dir = RootDir}}.

%% @private
%% @doc Handling call messages
-spec(handle_call(Request :: term(), From :: {pid(), Tag :: term()},
    State :: #state{}) ->
    {reply, Reply :: term(), NewState :: #state{}} |
    {reply, Reply :: term(), NewState :: #state{}, timeout() | hibernate} |
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_call({deploy, TaskId, ServiceId, TarUrl}, _From, State = #state{root_dir = RootDir, task_map = TaskMap}) ->
    %% Create the directories
    {ok, ServiceRootDir} = ensure_dirs(RootDir, ServiceId),

    ServicePid = efka_service:get_pid(ServiceId),
    case is_pid(ServicePid) of
        true ->
            {reply, {error, <<"the service is running, stop first">>}, State};
        false ->
            case check_download_url(TarUrl) of
                ok ->
                    {ok, TaskPid} = efka_inetd_task:start_link(TaskId, ServiceRootDir, ServiceId, TarUrl),
                    efka_inetd_task:deploy(TaskPid),
                    lager:debug("[efka_inetd] start task_id: ~p, tar_url: ~p", [TaskId, TarUrl]),

                    {reply, ok, State#state{task_map = maps:put(TaskPid, {TaskId, ServiceId}, TaskMap)}};
                {error, Reason} ->
                    lager:debug("[efka_inetd] check_download_url: ~p, get error: ~p", [TarUrl, Reason]),
                    {reply, {error, <<"download url error">>}, State}
            end
    end;

%% Start a service; if the service is already running, restarting is not allowed
handle_call({start_service, ServiceId}, _From, State) ->
    case efka_service:get_pid(ServiceId) of
        undefined ->
            case efka_service_sup:start_service(ServiceId) of
                {ok, _} ->
                    %% Update the database status so the service is started again the next time efka restarts
                    ok = service_model:change_status(ServiceId, 1),
                    {reply, ok, State};
                {error, Reason} ->
                    {reply, {error, Reason}, State}
            end;
        ServicePid when is_pid(ServicePid) ->
            {reply, {error, <<"service is running">>}, State}
    end;

%% Stop a service; an explicit stop updates the status field of the service configuration
handle_call({stop_service, ServiceId}, _From, State = #state{}) ->
    case efka_service:get_pid(ServiceId) of
        undefined ->
            {reply, {error, <<"service not running">>}, State};
        ServicePid when is_pid(ServicePid) ->
            efka_service_sup:stop_service(ServiceId),
            %% Update the database status so the service is not auto-started the next time efka restarts
            ok = service_model:change_status(ServiceId, 0),
            {reply, ok, State}
    end;

handle_call(_Request, _From, State = #state{}) ->
    {reply, ok, State}.

%% @private
%% @doc Handling cast messages
-spec(handle_cast(Request :: term(), State :: #state{}) ->
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_cast(_Request, State = #state{}) ->
    {noreply, State}.

%% @private
%% @doc Handling all non call/cast messages
-spec(handle_info(Info :: timeout() | term(), State :: #state{}) ->
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_info({'EXIT', TaskPid, Reason}, State = #state{task_map = TaskMap}) ->
    case maps:take(TaskPid, TaskMap) of
        error ->
            {noreply, State};
        {{TaskId, ServiceId}, NTaskMap} ->
            case Reason of
                normal ->
                    lager:debug("[efka_inetd] service_id: ~p, task_pid: ~p, exit normal", [ServiceId, TaskPid]),
                    efka_inetd_task_log:flush(TaskId);
                Error ->
                    lager:notice("[efka_inetd] service_id: ~p, task_pid: ~p, exit with error: ~p", [ServiceId, TaskPid, Error]),
                    efka_inetd_task_log:stash(TaskId, <<"task aborted">>),
                    efka_inetd_task_log:flush(TaskId)
            end,
            {noreply, State#state{task_map = NTaskMap}}
    end;

handle_info(_Info, State = #state{}) ->
    {noreply, State}.

%% @private
%% @doc This function is called by a gen_server when it is about to
%% terminate. It should be the opposite of Module:init/1 and do any
%% necessary cleaning up. When it returns, the gen_server terminates
%% with Reason. The return value is ignored.
-spec(terminate(Reason :: (normal | shutdown | {shutdown, term()} | term()),
    State :: #state{}) -> term()).
terminate(_Reason, _State = #state{}) ->
    ok.

%% @private
%% @doc Convert process state when code is changed
-spec(code_change(OldVsn :: term() | {down, term()}, State :: #state{},
    Extra :: term()) ->
    {ok, NewState :: #state{}} | {error, Reason :: term()}).
code_change(_OldVsn, State = #state{}, _Extra) ->
    {ok, State}.

%%%===================================================================
%%% Internal functions
%%%===================================================================

-spec ensure_dirs(RootDir :: string(), ServiceId :: binary()) -> {ok, ServiceRootDir :: string()}.
ensure_dirs(RootDir, ServiceId) when is_list(RootDir), is_binary(ServiceId) ->
    %% Root directory of the service
    ServiceRootDir = RootDir ++ "/" ++ binary_to_list(ServiceId) ++ "/",
    ok = filelib:ensure_dir(ServiceRootDir),
    {ok, ServiceRootDir}.

%% Use a HEAD request to verify the download URL first
-spec check_download_url(Url :: string() | binary()) -> ok | {error, Reason :: term()}.
check_download_url(Url) when is_binary(Url) ->
    check_download_url(binary_to_list(Url));
check_download_url(Url) when is_list(Url) ->
    SslOpts = [
        {ssl, [
            %% Certificate verification is disabled entirely
            {verify, verify_none}
        ]}
    ],
    case httpc:request(head, {Url, []}, SslOpts, [{sync, true}]) of
        {ok, {{_, 200, "OK"}, _Headers, _}} ->
            ok;
        {error, Reason} ->
            {error, Reason}
    end.
@@ -1,241 +0,0 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 07. May 2025 15:47
%%%-------------------------------------------------------------------
-module(efka_inetd_task).
-author("anlicheng").
-include("efka_tables.hrl").

-behaviour(gen_server).

%% API
-export([start_link/4]).
-export([deploy/1]).

%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).

-define(SERVER, ?MODULE).

-record(state, {
    service_root_dir :: string(),
    task_id :: integer(),
    service_id :: binary(),
    tar_url :: binary()
}).

%%%===================================================================
%%% API
%%%===================================================================

-spec deploy(Pid :: pid()) -> ok.
deploy(Pid) when is_pid(Pid) ->
    gen_server:cast(Pid, deploy).

%% @doc Spawns the server and registers the local name (unique)
-spec(start_link(TaskId :: integer(), ServiceRootDir :: string(), ServiceId :: binary(), TarUrl :: binary()) ->
    {ok, Pid :: pid()} | ignore | {error, Reason :: term()}).
start_link(TaskId, ServiceRootDir, ServiceId, TarUrl) when is_integer(TaskId), is_list(ServiceRootDir), is_binary(ServiceId), is_binary(TarUrl) ->
    gen_server:start_link(?MODULE, [TaskId, ServiceRootDir, ServiceId, TarUrl], []).

%%%===================================================================
%%% gen_server callbacks
%%%===================================================================

%% @private
%% @doc Initializes the server
-spec(init(Args :: term()) ->
    {ok, State :: #state{}} | {ok, State :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term()} | ignore).
init([TaskId, ServiceRootDir, ServiceId, TarUrl]) ->
    {ok, #state{task_id = TaskId, service_root_dir = ServiceRootDir, service_id = ServiceId, tar_url = TarUrl}}.

%% @private
%% @doc Handling call messages
-spec(handle_call(Request :: term(), From :: {pid(), Tag :: term()},
    State :: #state{}) ->
    {reply, Reply :: term(), NewState :: #state{}} |
    {reply, Reply :: term(), NewState :: #state{}, timeout() | hibernate} |
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_call(_Request, _From, State = #state{}) ->
    {reply, ok, State}.

%% @private
%% @doc Handling cast messages
-spec(handle_cast(Request :: term(), State :: #state{}) ->
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_cast(deploy, State = #state{task_id = TaskId, service_root_dir = ServiceRootDir, service_id = ServiceId, tar_url = TarUrl}) ->
    do_deploy(TaskId, ServiceRootDir, ServiceId, TarUrl),
    {stop, normal, State};
handle_cast(_Request, State) ->
    {stop, normal, State}.

%% @private
%% @doc Handling all non call/cast messages
-spec(handle_info(Info :: timeout() | term(), State :: #state{}) ->
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_info(_Info, State = #state{}) ->
    {noreply, State}.

%% @private
%% @doc This function is called by a gen_server when it is about to
%% terminate. It should be the opposite of Module:init/1 and do any
%% necessary cleaning up. When it returns, the gen_server terminates
%% with Reason. The return value is ignored.
-spec(terminate(Reason :: (normal | shutdown | {shutdown, term()} | term()),
    State :: #state{}) -> term()).
terminate(_Reason, _State = #state{}) ->
    ok.

%% @private
%% @doc Convert process state when code is changed
-spec(code_change(OldVsn :: term() | {down, term()}, State :: #state{},
    Extra :: term()) ->
    {ok, NewState :: #state{}} | {error, Reason :: term()}).
code_change(_OldVsn, State = #state{}, _Extra) ->
    {ok, State}.

%%%===================================================================
%%% Internal functions
%%%===================================================================

-spec do_deploy(TaskId :: integer(), ServiceRootDir :: string(), ServiceId :: binary(), TarUrl :: binary()) -> ok.
do_deploy(TaskId, ServiceRootDir, ServiceId, TarUrl) when is_integer(TaskId), is_list(ServiceRootDir), is_binary(ServiceId), is_binary(TarUrl) ->
    case download(binary_to_list(TarUrl), ServiceRootDir) of
        {ok, TarFile, CostTs} ->
            Log = io_lib:format("download: ~p completed, cost time: ~p(ms)", [binary_to_list(TarUrl), CostTs]),
            efka_inetd_task_log:stash(TaskId, list_to_binary(Log)),

            %% Create the working directory
            WorkDir = ServiceRootDir ++ "/work_dir/",
            case filelib:ensure_dir(WorkDir) of
                ok ->
                    %% Clean up any leftover files in the directory
                    catch delete_directory(WorkDir),
                    case tar_extract(TarFile, WorkDir) of
                        ok ->
                            %% Update the service record
                            ok = service_model:insert(#service{
                                service_id = ServiceId,
                                tar_url = TarUrl,
                                %% Working directory
                                root_dir = ServiceRootDir,
                                config_json = <<"">>,
                                %% Status: 0 = stopped, 1 = running
                                status = 0
                            }),
                            efka_inetd_task_log:stash(TaskId, <<"deploy success">>);
                        {error, Reason} ->
                            TarLog = io_lib:format("tar decompression: ~p, error: ~p", [filename:basename(TarFile), Reason]),
                            efka_inetd_task_log:stash(TaskId, list_to_binary(TarLog))
                    end;
                {error, Reason} ->
                    WorkDirLog = io_lib:format("make work_dir error: ~p", [Reason]),
                    efka_inetd_task_log:stash(TaskId, list_to_binary(WorkDirLog))
            end;
        {error, Reason} ->
            DownloadLog = io_lib:format("download: ~p, error: ~p", [binary_to_list(TarUrl), Reason]),
            efka_inetd_task_log:stash(TaskId, list_to_binary(DownloadLog))
    end.

%% Recursively delete the files under a directory
-spec delete_directory(string()) -> ok | {error, term()}.
delete_directory(Dir) when is_list(Dir) ->
    %% Recursively delete the directory contents
    case file:list_dir(Dir) of
        {ok, Files} ->
            lists:foreach(fun(File) ->
                FullPath = filename:join(Dir, File),
                case filelib:is_dir(FullPath) of
                    true ->
                        delete_directory(FullPath);
                    false ->
                        file:delete(FullPath)
                end
            end, Files),
            %% Delete the now-empty directory
            file:del_dir(Dir);
        {error, enoent} ->
            ok;
        {error, Reason} ->
            {error, Reason}
    end.

%% Extract the archive into the target directory
-spec tar_extract(string(), string()) -> ok | {error, term()}.
tar_extract(TarFile, TargetDir) when is_list(TarFile), is_list(TargetDir) ->
    %% The format could be detected from the file extension; options: verbose
    erl_tar:extract(TarFile, [compressed, {cwd, TargetDir}]).

%% Download a file
-spec download(Url :: string(), TargetDir :: string()) ->
    {ok, TarFile :: string(), CostTs :: integer()} | {error, Reason :: term()}.
download(Url, TargetDir) when is_list(Url), is_list(TargetDir) ->
    SslOpts = [
        {ssl, [
            %% Certificate verification is disabled entirely
            {verify, verify_none}
        ]}
    ],

    TargetFile = get_filename_from_url(Url),
    FullFilename = TargetDir ++ TargetFile,

    StartTs = os:timestamp(),
    case httpc:request(get, {Url, []}, SslOpts, [{sync, false}, {stream, self}]) of
        {ok, RequestId} ->
            case receive_data(RequestId, FullFilename) of
                ok ->
                    EndTs = os:timestamp(),
                    %% Elapsed time in milliseconds
                    CostMs = timer:now_diff(EndTs, StartTs) div 1000,
                    {ok, FullFilename, CostMs};
                {error, Reason} ->
                    %% On error the partial file must be deleted
                    file:delete(FullFilename),
                    {error, Reason}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

%% Wait for the stream to start; a 404 response is treated as an error
receive_data(RequestId, FullFilename) ->
    receive
        {http, {RequestId, stream_start, _Headers}} ->
            {ok, File} = file:open(FullFilename, [write, binary]),
            receive_data0(RequestId, File);
        {http, {RequestId, {{_, 404, Status}, _Headers, Body}}} ->
            lager:debug("[efka_downloader] http_status: ~p, body: ~p", [Status, Body]),
            {error, Status}
    end.

%% Receive the file data
receive_data0(RequestId, File) ->
    receive
        {http, {RequestId, {error, Reason}}} ->
            ok = file:close(File),
            {error, Reason};
        {http, {RequestId, stream_end, _Headers}} ->
            ok = file:close(File),
            ok;
        {http, {RequestId, stream, Data}} ->
            ok = file:write(File, Data),
            receive_data0(RequestId, File)
    end.

-spec get_filename_from_url(Url :: string()) -> string().
get_filename_from_url(Url) when is_list(Url) ->
    URIMap = uri_string:parse(Url),
    Path = maps:get(path, URIMap),
    filename:basename(Path).
175  apps/efka/src/efka_logger.erl  (new file)
@@ -0,0 +1,175 @@
%%%-------------------------------------------------------------------
%%% @author aresei
%%% @copyright (C) 2023, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 07. Sep 2023 17:07
%%%-------------------------------------------------------------------
-module(efka_logger).
-author("aresei").

-behaviour(gen_server).

%% API
-export([start_link/1, write/1, write_lines/1]).

%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).

-define(SERVER, ?MODULE).

-record(state, {
    file_name :: string(),
    date :: calendar:date(),
    file
}).

%%%===================================================================
%%% API
%%%===================================================================

-spec write(Data :: binary()) -> ok.
write(Data) when is_binary(Data) ->
    gen_server:cast(?SERVER, {write, Data}).

write_lines(Lines) when is_list(Lines) ->
    gen_server:cast(?SERVER, {write_lines, Lines}).

%% @doc Spawns the server and registers the local name (unique)
-spec(start_link(FileName :: string()) ->
    {ok, Pid :: pid()} | ignore | {error, Reason :: term()}).
start_link(FileName) when is_list(FileName) ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, [FileName], []).

%%%===================================================================
%%% gen_server callbacks
%%%===================================================================

%% @private
%% @doc Initializes the server
-spec(init(Args :: term()) ->
    {ok, State :: #state{}} | {ok, State :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term()} | ignore).
init([FileName]) ->
    ensure_dir(),
    FilePath = make_file(FileName),
    {ok, File} = file:open(FilePath, [append, binary]),

    {ok, #state{file = File, file_name = FileName, date = get_date()}}.

%% @private
%% @doc Handling call messages
-spec(handle_call(Request :: term(), From :: {pid(), Tag :: term()},
    State :: #state{}) ->
    {reply, Reply :: term(), NewState :: #state{}} |
    {reply, Reply :: term(), NewState :: #state{}, timeout() | hibernate} |
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_call(_Request, _From, State = #state{}) ->
    {reply, ok, State}.

%% @private
%% @doc Handling cast messages
-spec(handle_cast(Request :: term(), State :: #state{}) ->
    {noreply, NewState :: #state{}} |
    {noreply, NewState :: #state{}, timeout() | hibernate} |
    {stop, Reason :: term(), NewState :: #state{}}).
handle_cast({write, Data}, State = #state{file = OldFile, file_name = FileName, date = Date}) ->
    Line = <<(time_prefix())/binary, " ", (format(Data))/binary, $\n>>,
    case maybe_new_file(Date) of
        true ->
            file:close(OldFile),

            FilePath = make_file(FileName),
            {ok, File} = file:open(FilePath, [append, binary]),
            ok = file:write(File, Line),
            {noreply, State#state{file = File, date = get_date()}};
        false ->
            ok = file:write(OldFile, Line),
            {noreply, State}
    end;
handle_cast({write_lines, Lines}, State = #state{file = OldFile, file_name = FileName, date = Date}) ->
    Data = iolist_to_binary(lists:join(<<$\n>>, Lines)),
    case maybe_new_file(Date) of
        true ->
            file:close(OldFile),

            FilePath = make_file(FileName),
|
||||||
|
{ok, File} = file:open(FilePath, [append, binary]),
|
||||||
|
ok = file:write(File, Data),
|
||||||
|
{noreply, State#state{file = File, date = get_date()}};
|
||||||
|
false ->
|
||||||
|
ok = file:write(OldFile, Data),
|
||||||
|
{noreply, State}
|
||||||
|
end.
|
||||||
|
|
||||||
|
%% @private
|
||||||
|
%% @doc Handling all non call/cast messages
|
||||||
|
-spec(handle_info(Info :: timeout() | term(), State :: #state{}) ->
|
||||||
|
{noreply, NewState :: #state{}} |
|
||||||
|
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
||||||
|
{stop, Reason :: term(), NewState :: #state{}}).
|
||||||
|
handle_info(_Info, State = #state{}) ->
|
||||||
|
{noreply, State}.
|
||||||
|
|
||||||
|
%% @private
|
||||||
|
%% @doc This function is called by a gen_server when it is about to
|
||||||
|
%% terminate. It should be the opposite of Module:init/1 and do any
|
||||||
|
%% necessary cleaning up. When it returns, the gen_server terminates
|
||||||
|
%% with Reason. The return value is ignored.
|
||||||
|
-spec(terminate(Reason :: (normal | shutdown | {shutdown, term()} | term()),
|
||||||
|
State :: #state{}) -> term()).
|
||||||
|
terminate(_Reason, _State = #state{}) ->
|
||||||
|
ok.
|
||||||
|
|
||||||
|
%% @private
|
||||||
|
%% @doc Convert process state when code is changed
|
||||||
|
-spec(code_change(OldVsn :: term() | {down, term()}, State :: #state{},
|
||||||
|
Extra :: term()) ->
|
||||||
|
{ok, NewState :: #state{}} | {error, Reason :: term()}).
|
||||||
|
code_change(_OldVsn, State = #state{}, _Extra) ->
|
||||||
|
{ok, State}.
|
||||||
|
|
||||||
|
%%%===================================================================
|
||||||
|
%%% Internal functions
|
||||||
|
%%%===================================================================
|
||||||
|
|
||||||
|
format(Data) when is_binary(Data) ->
|
||||||
|
iolist_to_binary(Data);
|
||||||
|
format(Items) when is_list(Items) ->
|
||||||
|
iolist_to_binary(lists:join(<<"\t">>, Items)).
|
||||||
|
|
||||||
|
time_prefix() ->
|
||||||
|
{{Y, M, D}, {H, I, S}} = calendar:local_time(),
|
||||||
|
iolist_to_binary(io_lib:format("[~b-~2..0b-~2..0b ~2..0b:~2..0b:~2..0b]", [Y, M, D, H, I, S])).
|
||||||
|
|
||||||
|
-spec make_file(LogFile :: string()) -> string().
|
||||||
|
make_file(LogFile) when is_list(LogFile) ->
|
||||||
|
{Year, Month, Day} = erlang:date(),
|
||||||
|
Suffix = io_lib:format("~b~2..0b~2..0b", [Year, Month, Day]),
|
||||||
|
RootDir = code:root_dir() ++ "/log/",
|
||||||
|
lists:flatten(RootDir ++ LogFile ++ "." ++ Suffix).
|
||||||
|
|
||||||
|
ensure_dir() ->
|
||||||
|
RootDir = code:root_dir() ++ "/log/",
|
||||||
|
case filelib:is_dir(RootDir) of
|
||||||
|
true ->
|
||||||
|
ok;
|
||||||
|
false ->
|
||||||
|
file:make_dir(RootDir)
|
||||||
|
end.
|
||||||
|
|
||||||
|
%% 获取日期信息
|
||||||
|
-spec get_date() -> Date :: calendar:date().
|
||||||
|
get_date() ->
|
||||||
|
{Date, _} = calendar:local_time(),
|
||||||
|
Date.
|
||||||
|
|
||||||
|
%% 通过日志判断是否需要生成新的日志文件
|
||||||
|
-spec maybe_new_file(Date :: calendar:date()) -> boolean().
|
||||||
|
maybe_new_file({Y, M, D}) ->
|
||||||
|
{{Y0, M0, D0}, _} = calendar:local_time(),
|
||||||
|
not (Y =:= Y0 andalso M =:= M0 andalso D =:= D0).
|
||||||
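For reference, a minimal sketch of how the logger above is used; the demo module wrapper and the `"app"` file name are illustrative, not taken from the source tree.

```erlang
-module(efka_logger_demo).
-export([run/0]).

%% Illustrative only; assumes the node can write to code:root_dir() ++ "/log/".
run() ->
    {ok, _Pid} = efka_logger:start_link("app"),
    %% A single entry gets a "[YYYY-MM-DD HH:MM:SS] " prefix from the server
    ok = efka_logger:write(<<"service started">>),
    %% A batch is joined with newlines (no per-line timestamp prefix)
    ok = efka_logger:write_lines([<<"line one">>, <<"line two">>]).
```

Both calls are casts, so they return `ok` immediately; the file I/O happens asynchronously in the gen_server process.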
@ -1,124 +0,0 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%% Manages the manifest.json configuration file
%%% @end
%%% Created : 05. May 2025 22:39
%%%-------------------------------------------------------------------
-module(efka_manifest).
-author("anlicheng").

-record(manifest, {
    work_dir = "" :: string(),
    id = <<"">> :: binary(),
    exec = <<"">> :: binary(),
    args = [],
    health_check = <<"">>
}).

-type manifest() :: #manifest{}.

-export_type([manifest/0]).

%% API
-export([new/1, startup/1]).

-spec new(ServiceRootDir :: string()) -> {ok, #manifest{}} | {error, Reason :: binary()}.
new(ServiceRootDir) when is_list(ServiceRootDir) ->
    WorkDir = ServiceRootDir ++ "/work_dir/",
    case file:read_file(WorkDir ++ "manifest.json") of
        {ok, ManifestInfo} ->
            %% A decode failure yields a non-map term, which
            %% check_manifest/1 rejects with an error tuple
            Settings = catch jiffy:decode(ManifestInfo, [return_maps]),
            case check_manifest(Settings) of
                {ok, Manifest} ->
                    {ok, Manifest#manifest{work_dir = WorkDir}};
                {error, Reason} ->
                    {error, Reason}
            end;
        {error, Reason} ->
            {error, Reason}
    end.

-spec startup(Manifest :: #manifest{}) -> {ok, Port :: port()} | {error, Reason :: binary()}.
startup(#manifest{id = Id, work_dir = WorkDir, exec = ExecCmd0, args = Args0}) ->
    PortSettings = [
        {cd, WorkDir},
        {args, [binary_to_list(A) || A <- Args0]},
        exit_status
    ],
    ExecCmd = binary_to_list(ExecCmd0),
    RealExecCmd = filename:absname_join(WorkDir, ExecCmd),
    lager:debug("[efka_manifest] service_id: ~p, real command is: ~p", [Id, RealExecCmd]),
    case catch erlang:open_port({spawn_executable, RealExecCmd}, PortSettings) of
        Port when is_port(Port) ->
            {ok, Port};
        _Other ->
            {error, <<"exec command startup failed">>}
    end.

%% Check that the configuration is valid
-spec check_manifest(Manifest :: map()) -> {ok, #manifest{}} | {error, Reason :: binary()}.
check_manifest(Manifest) when is_map(Manifest) ->
    RequiredKeys = [<<"id">>, <<"exec">>, <<"args">>, <<"health_check">>],
    check_manifest0(RequiredKeys, Manifest, #manifest{});
check_manifest(_Manifest) ->
    {error, <<"invalid manifest json">>}.

check_manifest0([], _Settings, Manifest) ->
    {ok, Manifest};
check_manifest0([<<"id">>|T], Settings, Manifest) ->
    case maps:find(<<"id">>, Settings) of
        error ->
            {error, <<"missing service_id">>};
        {ok, Id} when is_binary(Id) ->
            check_manifest0(T, Settings, Manifest#manifest{id = Id});
        {ok, _} ->
            {error, <<"service_id is not string">>}
    end;
check_manifest0([<<"health_check">>|T], Settings, Manifest) ->
    case maps:find(<<"health_check">>, Settings) of
        error ->
            {error, <<"missing health_check">>};
        {ok, Url} when is_binary(Url) ->
            case is_url(Url) of
                true ->
                    check_manifest0(T, Settings, Manifest#manifest{health_check = Url});
                false ->
                    {error, <<"health_check is not a valid url">>}
            end;
        {ok, _} ->
            {error, <<"health_check is not string">>}
    end;
check_manifest0([<<"exec">>|T], Settings, Manifest) ->
    case maps:find(<<"exec">>, Settings) of
        error ->
            {error, <<"missing exec">>};
        {ok, Exec} when is_binary(Exec) ->
            %% Must not contain spaces; arguments belong in "args"
            case binary:match(Exec, <<" ">>) of
                nomatch ->
                    check_manifest0(T, Settings, Manifest#manifest{exec = Exec});
                _ ->
                    {error, <<"start cmd cannot contain args">>}
            end;
        {ok, _} ->
            {error, <<"exec is not string">>}
    end;
check_manifest0([<<"args">>|T], Settings, Manifest) ->
    case maps:find(<<"args">>, Settings) of
        error ->
            check_manifest0(T, Settings, Manifest#manifest{args = []});
        %% Argument items themselves are not validated
        {ok, Args} when is_list(Args) ->
            check_manifest0(T, Settings, Manifest#manifest{args = Args});
        {ok, _} ->
            {error, <<"args must be list">>}
    end.

-spec is_url(binary()) -> boolean().
is_url(Input) when is_binary(Input) ->
    %% uri_string:parse/1 reports bad input as {error, _, _} rather than
    %% throwing, so inspect the result instead of catching an exception
    case uri_string:parse(Input) of
        #{scheme := _} -> true;
        _ -> false
    end.
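The validation rules above imply a manifest of the following shape; the values are illustrative, and since `check_manifest/1` is internal the decoded map normally arrives via `new/1` reading `<ServiceRootDir>/work_dir/manifest.json`:

```erlang
%% The map jiffy:decode(Bin, [return_maps]) would produce for a valid file:
%%   id           - required, binary
%%   exec         - required, binary, must not contain spaces
%%   args         - optional (a missing key defaults to []), must be a list
%%   health_check - required, binary, must parse as a URL
#{
    <<"id">>           => <<"svc-001">>,
    <<"exec">>         => <<"bin/server">>,
    <<"args">>         => [<<"-p">>, <<"8080">>],
    <<"health_check">> => <<"http://127.0.0.1:8080/health">>
}
```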
@ -1,25 +0,0 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 03. June 2025 14:09
%%%-------------------------------------------------------------------
-module(efka_monitor).
-author("anlicheng").

%% API
-export([memory_top/1, cpu_top/1, stop/0]).

memory_top(Interval) when is_integer(Interval) ->
    spawn(fun() -> etop:start([{output, text}, {interval, Interval}, {lines, 20}, {sort, memory}]) end).

cpu_top(Interval) when is_integer(Interval) ->
    spawn(fun() -> etop:start([{output, text}, {interval, Interval}, {lines, 20}, {sort, runtime}]) end).

stop() ->
    etop:stop().
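A usage sketch for the monitor above, in an attached remote shell; `etop`'s `{interval, N}` option is in seconds, and only one etop instance runs at a time, hence the `stop/0` between calls.

```erlang
%% Illustrative shell session
1> efka_monitor:memory_top(5).   %% top 20 processes by memory, refreshed every 5s
2> efka_monitor:stop().
3> efka_monitor:cpu_top(5).      %% top 20 processes by runtime
4> efka_monitor:stop().
```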
apps/efka/src/efka_remote_agent.erl (new file, 390 lines)
@ -0,0 +1,390 @@
%%%-------------------------------------------------------------------
%%% @author anlicheng
%%% @copyright (C) 2025, <COMPANY>
%%% @doc
%%%
%%% @end
%%% Created : 21. May 2025 18:38
%%%-------------------------------------------------------------------
-module(efka_remote_agent).
-author("anlicheng").
-include("message.hrl").
-include("efka_tables.hrl").

-behaviour(gen_statem).

%% API
-export([start_link/0]).
-export([metric_data/2, ping/13, task_event_stream/3, close_task_event_stream/2]).

%% gen_statem callbacks
-export([init/1, handle_event/4, terminate/3, code_change/4, callback_mode/0]).

-define(SERVER, ?MODULE).

%% Markers for the agent's current state; data can only be sent
%% normally while in the activated state
-define(STATE_DENIED, denied).
-define(STATE_CONNECTING, connecting).
-define(STATE_AUTH, auth).
%% Cannot push messages to the server, but may still accept some server commands
-define(STATE_RESTRICTED, restricted).
%% Fully activated
-define(STATE_ACTIVATED, activated).

-record(state, {
    transport_pid :: undefined | pid(),
    transport_ref :: undefined | reference()
}).

%%%===================================================================
%%% API
%%%===================================================================

%% Send data (gen_statem:cast/2 always returns ok)
-spec metric_data(RouteKey :: binary(), Metric :: binary()) -> ok.
metric_data(RouteKey, Metric) when is_binary(RouteKey), is_binary(Metric) ->
    gen_statem:cast(?SERVER, {metric_data, RouteKey, Metric}).

-spec task_event_stream(TaskId :: integer(), Type :: binary(), Stream :: binary()) -> ok.
task_event_stream(TaskId, Type, Stream) when is_integer(TaskId), is_binary(Type), is_binary(Stream) ->
    gen_statem:cast(?SERVER, {task_event_stream, TaskId, Type, Stream}).

-spec close_task_event_stream(TaskId :: integer(), Reason :: binary()) -> ok.
close_task_event_stream(TaskId, Reason) when is_integer(TaskId), is_binary(Reason) ->
    gen_statem:cast(?SERVER, {close_task_event_stream, TaskId, Reason}).

ping(AdCode, BootTime, Province, City, EfkaVersion, KernelArch, Ips, CpuCore, CpuLoad, CpuTemperature, Disk, Memory, Interfaces) ->
    gen_statem:cast(?SERVER, {ping, AdCode, BootTime, Province, City, EfkaVersion, KernelArch, Ips, CpuCore, CpuLoad, CpuTemperature, Disk, Memory, Interfaces}).

%% @doc Creates a gen_statem process which calls Module:init/1 to
%% initialize. To ensure a synchronized start-up procedure, this
%% function does not return until Module:init/1 has returned.
start_link() ->
    gen_statem:start_link({local, ?SERVER}, ?MODULE, [], []).

%%%===================================================================
%%% gen_statem callbacks
%%%===================================================================

%% @private
%% @doc Whenever a gen_statem is started using gen_statem:start/[3,4] or
%% gen_statem:start_link/[3,4], this function is called by the new
%% process to initialize.
init([]) ->
    erlang:start_timer(0, self(), create_transport),
    {ok, ?STATE_DENIED, #state{}}.

%% @private
%% @doc This function is called by a gen_statem when it needs to find out
%% the callback mode of the callback module.
callback_mode() ->
    handle_event_function.

%% @private
%% @doc If callback_mode is handle_event_function, then whenever a
%% gen_statem receives an event from call/2, cast/2, or as a normal
%% process message, this function is called.

%% Send data asynchronously: send directly while the connection is up,
%% otherwise cache it in mnesia
handle_event(cast, {metric_data, RouteKey, Metric}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    Packet = message_codec:encode(?MESSAGE_DATA, #data{
        route_key = RouteKey,
        metric = Metric
    }),
    efka_transport:send(TransportPid, Packet),
    {keep_state, State};

handle_event(cast, {metric_data, RouteKey, Metric}, _, State) ->
    Packet = message_codec:encode(?MESSAGE_DATA, #data{
        route_key = RouteKey,
        metric = Metric
    }),
    ok = cache_model:insert(Packet),
    {keep_state, State};

handle_event(cast, {task_event_stream, TaskId, Type, Stream}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    lager:debug("[efka_remote_agent] event_stream task_id: ~p, stream: ~ts", [TaskId, Stream]),
    EventPacket = message_codec:encode(?MESSAGE_EVENT_STREAM, #task_event_stream{
        task_id = TaskId,
        type = Type,
        stream = Stream
    }),
    efka_transport:send(TransportPid, EventPacket),
    {keep_state, State};

handle_event(cast, {close_task_event_stream, TaskId, Reason}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    EventPacket = message_codec:encode(?MESSAGE_EVENT_STREAM, #task_event_stream{
        task_id = TaskId,
        type = <<"close">>,
        stream = Reason
    }),
    efka_transport:send(TransportPid, EventPacket),
    {keep_state, State};

%% Ignore stream events in any other state (the cast is a 4-tuple,
%% so all four elements must be matched here)
handle_event(cast, {task_event_stream, _TaskId, _Type, _Stream}, _, State = #state{}) ->
    {keep_state, State};
handle_event(cast, {close_task_event_stream, _TaskId, _Reason}, _, State = #state{}) ->
    {keep_state, State};

%handle_event(cast, {ping, AdCode, BootTime, Province, City, EfkaVersion, KernelArch, Ips, CpuCore, CpuLoad, CpuTemperature, Disk, Memory, Interfaces}, ?STATE_ACTIVATED,
%    State = #state{transport_pid = TransportPid}) ->
%
%    Ping = message_pb:encode_msg(#ping{
%        adcode = AdCode,
%        boot_time = BootTime,
%        province = Province,
%        city = City,
%        efka_version = EfkaVersion,
%        kernel_arch = KernelArch,
%        ips = Ips,
%        cpu_core = CpuCore,
%        cpu_load = CpuLoad,
%        cpu_temperature = CpuTemperature,
%        disk = Disk,
%        memory = Memory,
%        interfaces = Interfaces
%    }),
%    efka_transport:send(TransportPid, ?METHOD_PING, Ping),
%    {keep_state, State};

%% Establish the connection to the server asynchronously
handle_event(info, {timeout, _, create_transport}, ?STATE_DENIED, State) ->
    {ok, Props} = application:get_env(efka, tls_server),
    Host = proplists:get_value(host, Props),
    Port = proplists:get_value(port, Props),
    {ok, {TransportPid, TransportRef}} = efka_transport:start_monitor(self(), Host, Port),
    efka_transport:connect(TransportPid),

    {next_state, ?STATE_CONNECTING, State#state{transport_pid = TransportPid, transport_ref = TransportRef}};

handle_event(info, {connect_reply, Reply}, ?STATE_CONNECTING, State = #state{transport_pid = TransportPid}) ->
    case Reply of
        ok ->
            AuthBin = auth_request(),
            efka_transport:auth_request(TransportPid, AuthBin),
            {next_state, ?STATE_AUTH, State};
        {error, Reason} ->
            lager:debug("[efka_remote_agent] connect failed, error: ~p, pid: ~p", [Reason, TransportPid]),
            efka_transport:stop(TransportPid),
            {next_state, ?STATE_DENIED, State#state{transport_pid = undefined}}
    end;

handle_event(info, {auth_reply, Reply}, ?STATE_AUTH, State = #state{transport_pid = TransportPid}) ->
    case Reply of
        {ok, #auth_reply{code = Code, payload = Message}} ->
            case Code of
                0 ->
                    lager:debug("[efka_remote_agent] auth success, message: ~p", [Message]),
                    {next_state, ?STATE_ACTIVATED, State, [{next_event, info, flush_cache}]};
                1 ->
                    %% The host has not been authorized in the backend; the agent
                    %% cannot push data to the cloud server, but the server can
                    %% still push commands to the agent.
                    %% The socket connection must be kept alive.
                    lager:debug("[efka_remote_agent] auth denied, message: ~p", [Message]),
                    {next_state, ?STATE_RESTRICTED, State};
                2 ->
                    %% Other errors: retry after an interval
                    lager:debug("[efka_remote_agent] auth failed, message: ~p", [Message]),
                    efka_transport:stop(TransportPid),
                    {next_state, ?STATE_DENIED, State#state{transport_pid = undefined}};
                _ ->
                    %% Other errors: retry after an interval
                    lager:debug("[efka_remote_agent] auth failed, invalid message"),
                    efka_transport:stop(TransportPid),
                    {next_state, ?STATE_DENIED, State#state{transport_pid = undefined}}
            end;
        {error, Reason} ->
            lager:debug("[efka_remote_agent] auth_request failed, error: ~p", [Reason]),
            efka_transport:stop(TransportPid),
            {next_state, ?STATE_DENIED, State#state{transport_pid = undefined}}
    end;

%% Flush cached data to the server
handle_event(info, flush_cache, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    case cache_model:fetch_next() of
        {ok, {Id, Packet}} ->
            efka_transport:send(TransportPid, Packet),
            cache_model:delete(Id),
            {keep_state, State, [{next_event, info, flush_cache}]};
        error ->
            {keep_state, State}
    end;
handle_event(info, flush_cache, _, State) ->
    {keep_state, State};

%% Messages pushed by the cloud server
%% Activation messages

%% Microservice deployment
handle_event(info, {server_rpc, PacketId, #jsonrpc_request{method = <<"get_containers">>}}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    %% Brief wait: efka_inetd returns immediately after receiving the message
    case docker_manager:get_containers() of
        {ok, Containers} ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_success(Containers));
        {error, Reason} when is_binary(Reason) ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_error(-1, Reason))
    end,
    {keep_state, State};

%% Microservice deployment
handle_event(info, {server_rpc, PacketId, #jsonrpc_request{method = <<"deploy">>, params = Params}}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    #{<<"task_id">> := TaskId, <<"config">> := Config} = Params,
    %% Brief wait: efka_inetd returns immediately after receiving the message
    case docker_manager:deploy(TaskId, Config) of
        ok ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_success(<<"ok">>));
        {error, Reason} when is_binary(Reason) ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_error(-1, Reason))
    end,
    {keep_state, State};

%% Start a microservice container
handle_event(info, {server_rpc, PacketId, #jsonrpc_request{method = <<"start_container">>, params = Params}}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    #{<<"container_name">> := ContainerName} = Params,
    %% Brief wait: efka_inetd returns immediately after receiving the message
    case docker_manager:start_container(ContainerName) of
        ok ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_success(<<"ok">>));
        {error, Reason} when is_binary(Reason) ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_error(-1, Reason))
    end,
    {keep_state, State};

%% Stop a microservice container
handle_event(info, {server_rpc, PacketId, #jsonrpc_request{method = <<"stop_container">>, params = Params}}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    #{<<"container_name">> := ContainerName} = Params,
    %% Brief wait: efka_inetd returns immediately after receiving the message
    case docker_manager:stop_container(ContainerName) of
        ok ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_success(<<"ok">>));
        {error, Reason} when is_binary(Reason) ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_error(-1, Reason))
    end,
    {keep_state, State};

handle_event(info, {server_rpc, PacketId, #jsonrpc_request{method = <<"kill_container">>, params = Params}}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    #{<<"container_name">> := ContainerName} = Params,
    %% Brief wait: efka_inetd returns immediately after receiving the message
    case docker_manager:kill_container(ContainerName) of
        ok ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_success(<<"ok">>));
        {error, Reason} when is_binary(Reason) ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_error(-1, Reason))
    end,
    {keep_state, State};

handle_event(info, {server_rpc, PacketId, #jsonrpc_request{method = <<"remove_container">>, params = Params}}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    #{<<"container_name">> := ContainerName} = Params,
    %% Brief wait: efka_inetd returns immediately after receiving the message
    case docker_manager:remove_container(ContainerName) of
        ok ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_success(<<"ok">>));
        {error, Reason} when is_binary(Reason) ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_error(-1, Reason))
    end,
    {keep_state, State};

%% config.json configuration
handle_event(info, {server_rpc, PacketId, #jsonrpc_request{method = <<"config_container">>, params = Params}}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
    #{<<"container_name">> := ContainerName, <<"config">> := Config} = Params,
    case docker_manager:config_container(ContainerName, Config) of
        ok ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_success(<<"ok">>));
        {error, Reason} ->
            efka_transport:rpc_reply(TransportPid, PacketId, reply_error(-1, Reason))
    end,
    {keep_state, State};

%% Handle task_log
%handle_event(info, {server_async_call, PacketId, <<?PUSH_TASK_LOG:8, TaskLogBin/binary>>}, ?STATE_ACTIVATED, State = #state{transport_pid = TransportPid}) ->
%    #fetch_task_log{task_id = TaskId} = message_pb:decode_msg(TaskLogBin, fetch_task_log),
%    lager:debug("[efka_remote_agent] get task_log request: ~p", [TaskId]),
%    {ok, Logs} = efka_inetd_task_log:get_logs(TaskId),
%    Reply = case length(Logs) > 0 of
%        true ->
%            Result = iolist_to_binary(jiffy:encode(Logs, [force_utf8])),
%            #async_call_reply{code = 1, result = Result};
%        false ->
%            #async_call_reply{code = 1, result = <<"[]">>}
%    end,
%    efka_transport:async_call_reply(TransportPid, PacketId, message_pb:encode_msg(Reply)),
%
%    {keep_state, State};

%% Handle commands
handle_event(info, {server_cast, #command{command_type = ?COMMAND_AUTH, command = Auth0}}, StateName, State = #state{transport_pid = TransportPid}) ->
    Auth = binary_to_integer(Auth0),
    case {Auth, StateName} of
        {1, ?STATE_ACTIVATED} ->
            {keep_state, State};
        {1, ?STATE_DENIED} ->
            %% Re-activated: must authenticate again
            AuthRequestBin = auth_request(),
            efka_transport:auth_request(TransportPid, AuthRequestBin),
            {next_state, ?STATE_AUTH, State};
        {0, _} ->
            %% The host should now be restricted: it may not send messages,
            %% but can still receive messages pushed by the server
            {next_state, ?STATE_RESTRICTED, State}
    end;

%% Handle the Pub/Sub mechanism
handle_event(info, {server_cast, #pub{topic = Topic, qos = Qos, content = Content}}, ?STATE_ACTIVATED, State) ->
    lager:debug("[efka_remote_agent] get pub topic: ~p, qos: ~p, content: ~p", [Topic, Qos, Content]),
    %% Deliver the message to the subscription system
    efka_subscription:publish(Topic, Qos, Content),
    {keep_state, State};

%% The transport process exited
handle_event(info, {'DOWN', MRef, process, TransportPid, Reason}, _, State = #state{transport_ref = MRef}) ->
    lager:debug("[efka_remote_agent] transport pid: ~p, exit with reason: ~p", [TransportPid, Reason]),
    erlang:start_timer(5000, self(), create_transport),
    {next_state, ?STATE_DENIED, State#state{transport_pid = undefined, transport_ref = undefined}};

%% Ignore any other event (e.g. ping casts while the handler above is
%% commented out) instead of crashing the state machine
handle_event(_EventType, _EventContent, _StateName, State) ->
    {keep_state, State}.

%% @private
%% @doc This function is called by a gen_statem when it is about to
%% terminate. It should be the opposite of Module:init/1 and do any
%% necessary cleaning up. When it returns, the gen_statem terminates with
%% Reason. The return value is ignored.
terminate(_Reason, _StateName, _State = #state{transport_pid = TransportPid}) ->
    case is_pid(TransportPid) andalso is_process_alive(TransportPid) of
        true ->
            efka_transport:stop(TransportPid);
        false ->
            ok
    end,
    ok.

%% @private
%% @doc Convert process state when code is changed
code_change(_OldVsn, StateName, State = #state{}, _Extra) ->
    {ok, StateName, State}.

%%%===================================================================
%%% Internal functions
%%%===================================================================

-spec auth_request() -> binary().
auth_request() ->
    {ok, AuthInfo} = application:get_env(efka, auth),
    UUID = proplists:get_value(uuid, AuthInfo),
    Username = proplists:get_value(username, AuthInfo),
    Salt = proplists:get_value(salt, AuthInfo),
    Token = proplists:get_value(token, AuthInfo),

    message_codec:encode(?MESSAGE_AUTH_REQUEST, #auth_request{
        uuid = unicode:characters_to_binary(UUID),
        username = unicode:characters_to_binary(Username),
        salt = unicode:characters_to_binary(Salt),
        token = unicode:characters_to_binary(Token),
        timestamp = efka_util:timestamp()
    }).

-spec reply_success(Result :: any()) -> binary().
reply_success(Result) ->
    message_codec:encode(?MESSAGE_JSONRPC_REPLY, #jsonrpc_reply{result = Result}).

-spec reply_error(Code :: integer(), Message :: binary()) -> binary().
reply_error(Code, Message) when is_integer(Code), is_binary(Message) ->
    Error = #{
        <<"code">> => Code,
        <<"message">> => Message
    },
    message_codec:encode(?MESSAGE_JSONRPC_REPLY, #jsonrpc_reply{error = Error}).
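The clauses above implement the following lifecycle (a summary sketch, not generated output); callers interact with it only through the cast-based API:

```erlang
%% denied      --create_transport timer--> connecting
%% connecting  --{connect_reply, ok}-->    auth        (auth_request sent)
%% connecting  --{connect_reply, error}--> denied      (transport stopped)
%% auth        --auth_reply code 0-->      activated   (flush_cache triggered)
%% auth        --auth_reply code 1-->      restricted  (socket kept alive)
%% auth        --auth_reply code 2/other-> denied      (transport stopped)
%% any state   --'DOWN' from transport-->  denied      (reconnect after 5000 ms)

%% Illustrative call (route key and payload are made up): while the statem is
%% not activated, the encoded packet is cached via cache_model and flushed
%% after the next successful auth.
ok = efka_remote_agent:metric_data(<<"telemetry.cpu">>, <<"{\"load\":0.42}">>).
```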
@ -17,7 +17,6 @@
|
|||||||
%% API
|
%% API
|
||||||
-export([start_link/2]).
|
-export([start_link/2]).
|
||||||
-export([get_name/1, get_pid/1, attach_channel/2]).
|
-export([get_name/1, get_pid/1, attach_channel/2]).
|
||||||
-export([push_config/3, request_config/1, invoke/3]).
|
|
||||||
-export([metric_data/3, send_event/3]).
|
-export([metric_data/3, send_event/3]).
|
||||||
|
|
||||||
%% gen_server callbacks
|
%% gen_server callbacks
|
||||||
@ -26,17 +25,7 @@
|
|||||||
-record(state, {
|
-record(state, {
|
||||||
service_id :: binary(),
|
service_id :: binary(),
|
||||||
%% 通道id信息
|
%% 通道id信息
|
||||||
channel_pid :: pid() | undefined,
|
channel_pid :: pid() | undefined
|
||||||
%% 当前进程的port信息, OSPid = erlang:port_info(Port, os_pid)
|
|
||||||
port :: undefined | port(),
|
|
||||||
%% 系统对应的pid
|
|
||||||
os_pid :: undefined | integer(),
|
|
||||||
%% 配置信息
|
|
||||||
manifest :: undefined | efka_manifest:manifest(),
|
|
||||||
inflight = #{},
|
|
||||||
|
|
||||||
%% 映射关系: #{Ref => Fun}
|
|
||||||
callbacks = #{}
|
|
||||||
}).
|
}).
|
||||||
|
|
||||||
%%%===================================================================
|
%%%===================================================================
|
||||||
@@ -51,21 +40,9 @@ get_name(ServiceId) when is_binary(ServiceId) ->
 get_pid(ServiceId) when is_binary(ServiceId) ->
     whereis(get_name(ServiceId)).
 
--spec push_config(Pid :: pid(), Ref :: reference(), ConfigJson :: binary()) -> no_return().
-push_config(Pid, Ref, ConfigJson) when is_pid(Pid), is_binary(ConfigJson) ->
-    gen_server:cast(Pid, {push_config, Ref, self(), ConfigJson}).
-
--spec invoke(Pid :: pid(), Ref :: reference(), Payload :: binary()) -> no_return().
-invoke(Pid, Ref, Payload) when is_pid(Pid), is_reference(Ref), is_binary(Payload) ->
-    gen_server:cast(Pid, {invoke, Ref, self(), Payload}).
-
--spec request_config(Pid :: pid()) -> {ok, Config :: binary()}.
-request_config(Pid) when is_pid(Pid) ->
-    gen_server:call(Pid, request_config).
-
--spec metric_data(Pid :: pid(), DeviceUUID :: binary(), Data :: binary()) -> no_return().
-metric_data(Pid, DeviceUUID, Data) when is_pid(Pid), is_binary(DeviceUUID), is_binary(Data) ->
-    gen_server:cast(Pid, {metric_data, DeviceUUID, Data}).
+-spec metric_data(Pid :: pid(), RouteKey :: binary(), Metric :: binary()) -> no_return().
+metric_data(Pid, RouteKey, Metric) when is_pid(Pid), is_binary(RouteKey), is_binary(Metric) ->
+    gen_server:cast(Pid, {metric_data, RouteKey, Metric}).
 
 -spec send_event(Pid :: pid(), EventType :: integer(), Params :: binary()) -> no_return().
 send_event(Pid, EventType, Params) when is_pid(Pid), is_integer(EventType), is_binary(Params) ->
@@ -92,29 +69,8 @@ start_link(Name, ServiceId) when is_atom(Name), is_binary(ServiceId) ->
     {stop, Reason :: term()} | ignore).
 init([ServiceId]) ->
     %% ensure terminate/2 is called when the supervisor stops the child via exit(ChildPid, shutdown)
-    erlang:process_flag(trap_exit, true),
-    case service_model:get_service(ServiceId) of
-        error ->
-            lager:notice("[efka_service] service_id: ~p, not found", [ServiceId]),
-            ignore;
-        {ok, #service{root_dir = RootDir}} ->
-            %% the first start must succeed; the later restart logic only makes sense after a successful first start
-            case efka_manifest:new(RootDir) of
-                {ok, Manifest} ->
-                    case efka_manifest:startup(Manifest) of
-                        {ok, Port} ->
-                            {os_pid, OSPid} = erlang:port_info(Port, os_pid),
-                            lager:debug("[efka_service] service: ~p, port: ~p, boot_service success os_pid: ~p", [ServiceId, Port, OSPid]),
-                            {ok, #state{service_id = ServiceId, manifest = Manifest, port = Port, os_pid = OSPid}};
-                        {error, Reason} ->
-                            lager:debug("[efka_service] service: ~p, boot_service get error: ~p", [ServiceId, Reason]),
-                            {stop, Reason}
-                    end;
-                {error, Reason} ->
-                    lager:notice("[efka_service] service: ~p, read manifest.json get error: ~p", [ServiceId, Reason]),
-                    ignore
-            end
-    end.
+    lager:debug("[efka_service] service_id: ~p, started", [ServiceId]),
+    {ok, #state{service_id = ServiceId}}.
 
 %% @private
 %% @doc Handling call messages
@@ -137,15 +93,6 @@ handle_call({attach_channel, ChannelPid}, _From, State = #state{channel_pid = Ol
     {reply, {error, <<"channel exists">>}, State}
 end;
 
-%% request the config (done)
-handle_call(request_config, _From, State = #state{service_id = ServiceId}) ->
-    case service_model:get_config_json(ServiceId) of
-        {ok, ConfigJson} ->
-            {reply, {ok, ConfigJson}, State};
-        error ->
-            {reply, {ok, <<>>}, State}
-    end;
-
 handle_call(_Request, _From, State = #state{}) ->
     {reply, ok, State}.
 
@@ -155,40 +102,11 @@ handle_call(_Request, _From, State = #state{}) ->
     {noreply, NewState :: #state{}} |
     {noreply, NewState :: #state{}, timeout() | hibernate} |
     {stop, Reason :: term(), NewState :: #state{}}).
-handle_cast({metric_data, DeviceUUID, LineProtocolData}, State = #state{service_id = ServiceId}) ->
-    lager:debug("[efka_service] metric_data service_id: ~p, device_uuid: ~p, metric data: ~p", [ServiceId, DeviceUUID, LineProtocolData]),
-    efka_agent:metric_data(ServiceId, DeviceUUID, LineProtocolData),
+handle_cast({metric_data, RouteKey, Metric}, State = #state{service_id = ServiceId}) ->
+    lager:debug("[efka_service] metric_data service_id: ~p, route_key: ~p, metric data: ~p", [ServiceId, RouteKey, Metric]),
+    efka_remote_agent:metric_data(RouteKey, Metric),
     {noreply, State};
 
-handle_cast({send_event, EventType, Params}, State = #state{service_id = ServiceId}) ->
-    efka_agent:event(ServiceId, EventType, Params),
-    lager:debug("[efka_service] send_event, service_id: ~p, event_type: ~p, params: ~p", [ServiceId, EventType, Params]),
-    {noreply, State};
-
-%% push a config item
-handle_cast({push_config, Ref, ReceiverPid, ConfigJson}, State = #state{channel_pid = ChannelPid, service_id = ServiceId, inflight = Inflight, callbacks = Callbacks}) ->
-    case is_pid(ChannelPid) andalso is_process_alive(ChannelPid) of
-        true ->
-            efka_tcp_channel:push_config(ChannelPid, Ref, self(), ConfigJson),
-            %% on success the service config must be updated as well
-            CB = fun() -> service_model:set_config(ServiceId, ConfigJson) end,
-            {noreply, State#state{inflight = maps:put(Ref, ReceiverPid, Inflight), callbacks = maps:put(Ref, CB, Callbacks)}};
-        false ->
-            ReceiverPid ! {service_reply, Ref, {error, <<"channel is not alive">>}},
-            {noreply, State}
-    end;
-
-%% invoke a remote call
-handle_cast({invoke, Ref, ReceiverPid, Payload}, State = #state{channel_pid = ChannelPid, inflight = Inflight}) ->
-    case is_pid(ChannelPid) andalso is_process_alive(ChannelPid) of
-        true ->
-            efka_tcp_channel:invoke(ChannelPid, Ref, self(), Payload),
-            {noreply, State#state{inflight = maps:put(Ref, ReceiverPid, Inflight)}};
-        false ->
-            ReceiverPid ! {service_reply, Ref, {error, <<"channel is not alive">>}},
-            {noreply, State}
-    end;
-
 handle_cast(_Request, State = #state{}) ->
     {noreply, State}.
 
@@ -198,49 +116,10 @@ handle_cast(_Request, State = #state{}) ->
     {noreply, NewState :: #state{}} |
     {noreply, NewState :: #state{}, timeout() | hibernate} |
     {stop, Reason :: term(), NewState :: #state{}}).
-%% restart the service
-handle_info({timeout, _, reboot_service}, State = #state{service_id = ServiceId, manifest = Manifest}) ->
-    case efka_manifest:startup(Manifest) of
-        {ok, Port} ->
-            {os_pid, OSPid} = erlang:port_info(Port, os_pid),
-            lager:debug("[efka_service] service_id: ~p, reboot success, port: ~p, os_pid: ~p", [ServiceId, Port, OSPid]),
-            {noreply, State#state{port = Port, os_pid = OSPid}};
-        {error, Reason} ->
-            lager:debug("[efka_service] service_id: ~p, boot_service get error: ~p", [ServiceId, Reason]),
-            try_reboot(),
-            {noreply, State}
-    end;
-
-%% handle the channel's reply
-handle_info({channel_reply, Ref, Reply}, State = #state{inflight = Inflight, callbacks = Callbacks}) ->
-    case maps:take(Ref, Inflight) of
-        error ->
-            {noreply, State};
-        {ReceiverPid, NInflight} ->
-            ReceiverPid ! {service_reply, Ref, Reply},
-            {noreply, State#state{inflight = NInflight, callbacks = trigger_callback(Ref, Callbacks)}}
-    end;
-
-handle_info({Port, {data, Data}}, State = #state{service_id = ServiceId}) when is_port(Port) ->
-    lager:debug("[efka_service] service_id: ~p, port data: ~p", [ServiceId, Data]),
-    {noreply, State};
-
-%% handle port messages; a passive close of the port triggers this, so Port equals State#state.port here
-handle_info({Port, {exit_status, Code}}, State = #state{service_id = ServiceId}) when is_port(Port) ->
-    lager:debug("[efka_service] service_id: ~p, port: ~p, exit with code: ~p", [ServiceId, Port, Code]),
-    {noreply, State#state{port = undefined, os_pid = undefined}};
-
-%% handle the port's exit message
-handle_info({'EXIT', Port, Reason}, State = #state{service_id = ServiceId}) when is_port(Port) ->
-    lager:debug("[efka_service] service_id: ~p, port: ~p, exit with reason: ~p", [ServiceId, Port, Reason]),
-    try_reboot(),
-    {noreply, State#state{port = undefined, os_pid = undefined}};
-
 %% handle the channel process exiting
 handle_info({'DOWN', _Ref, process, ChannelPid, Reason}, State = #state{channel_pid = ChannelPid, service_id = ServiceId}) ->
     lager:debug("[efka_service] service_id: ~p, channel exited: ~p", [ServiceId, Reason]),
-    {noreply, State#state{channel_pid = undefined, inflight = #{}}}.
+    {noreply, State#state{channel_pid = undefined}}.
 
 %% @private
 %% @doc This function is called by a gen_server when it is about to
@@ -249,9 +128,7 @@ handle_info({'DOWN', _Ref, process, ChannelPid, Reason}, State = #state{channel_
 %% with Reason. The return value is ignored.
 -spec(terminate(Reason :: (normal | shutdown | {shutdown, term()} | term()),
     State :: #state{}) -> term()).
-terminate(Reason, _State = #state{service_id = ServiceId, port = Port, os_pid = OSPid}) ->
-    erlang:is_port(Port) andalso erlang:port_close(Port),
-    catch kill_os_pid(OSPid),
+terminate(Reason, _State = #state{service_id = ServiceId}) ->
     lager:debug("[efka_service] service_id: ~p, terminate with reason: ~p", [ServiceId, Reason]),
     ok.
 
@@ -265,27 +142,4 @@ code_change(_OldVsn, State = #state{}, _Extra) ->
 %%%===================================================================
 %%% Internal functions
 %%%===================================================================
 
-%% kill the OS process
--spec kill_os_pid(integer() | undefined) -> no_return().
-kill_os_pid(undefined) ->
-    ok;
-kill_os_pid(OSPid) when is_integer(OSPid) ->
-    Cmd = lists:flatten(io_lib:format("kill -9 ~p", [OSPid])),
-    lager:debug("kill cmd is: ~p", [Cmd]),
-    os:cmd(Cmd).
-
--spec try_reboot() -> no_return().
-try_reboot() ->
-    erlang:start_timer(5000, self(), reboot_service).
-
--spec trigger_callback(Ref :: reference(), Callbacks :: map()) -> NewCallbacks :: map().
-trigger_callback(Ref, Callbacks) ->
-    case maps:take(Ref, Callbacks) of
-        error ->
-            Callbacks;
-        {Fun, NCallbacks} ->
-            catch Fun(),
-            NCallbacks
-    end.
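The removed push_config/invoke path implements a common asynchronous request pattern: every request is tagged with a unique Ref, the requester's pid is parked in an `inflight` map, an optional per-request callback is stored in `callbacks`, and a later `channel_reply` pops both. A minimal sketch of that bookkeeping (the class and its names are illustrative, not from the codebase):

```python
class Inflight:
    """Tracks pending requests by ref, mirroring the inflight/callbacks maps."""

    def __init__(self):
        self.inflight = {}   # ref -> receiver callable (stands in for a pid)
        self.callbacks = {}  # ref -> zero-arg callable, run after a reply arrives

    def send(self, ref, receiver, callback=None):
        # Like maps:put(Ref, ReceiverPid, Inflight) plus maps:put(Ref, CB, Callbacks).
        self.inflight[ref] = receiver
        if callback is not None:
            self.callbacks[ref] = callback

    def on_reply(self, ref, reply):
        # Like maps:take(Ref, Inflight): unknown refs are silently dropped.
        receiver = self.inflight.pop(ref, None)
        if receiver is None:
            return
        receiver(ref, reply)  # deliver {service_reply, Ref, Reply}
        cb = self.callbacks.pop(ref, None)
        if cb is not None:
            cb()              # e.g. persist the config only once the push succeeded
```

Storing the side effect (persisting the config) as a callback keyed by Ref is what lets the server confirm the push before committing it.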
@@ -41,14 +41,7 @@ start_link() ->
 %% specifications.
 init([]) ->
     SupFlags = #{strategy => one_for_one, intensity => 1000, period => 3600},
-    %% simplified logic: only start the services that need to run
-    {ok, Services} = service_model:get_running_services(),
-    ServiceIds = lists:map(fun(#service{service_id = ServiceId}) -> ServiceId end, Services),
-    lager:debug("[efka_service_sup] will start services: ~p", [ServiceIds]),
-
-    Specs = lists:map(fun(ServiceId) -> child_spec(ServiceId) end, Services),
-
-    {ok, {SupFlags, Specs}}.
+    {ok, {SupFlags, []}}.
 
 %%%===================================================================
 %%% Internal functions
@@ -71,8 +64,6 @@ stop_service(ServiceId) when is_binary(ServiceId) ->
     supervisor:terminate_child(?MODULE, ChildId),
     supervisor:delete_child(?MODULE, ChildId).
 
-child_spec(#service{service_id = ServiceId}) when is_binary(ServiceId) ->
-    child_spec(ServiceId);
 child_spec(ServiceId) when is_binary(ServiceId) ->
     Name = efka_service:get_name(ServiceId),
     #{
@@ -1,291 +0,0 @@
-%%%-------------------------------------------------------------------
-%%% @author anlicheng
-%%% @copyright (C) 2025, <COMPANY>
-%%% @doc
-%%% 1. Manages the whole lifecycle of the service: start and stop
-%%% 2. Monitors the service status via a port
-%%% 3. Starting and stopping services is controlled at a higher level
-%%% @end
-%%% Created : 18. Apr 2025 16:50
-%%%-------------------------------------------------------------------
--module(efka_std_modbus_service).
--author("anlicheng").
--include("efka_tables.hrl").
-
--behaviour(gen_server).
-
-%% API
--export([start_link/2]).
--export([get_name/1, get_pid/1, attach_channel/2]).
--export([push_config/3, request_config/1, invoke/3]).
--export([metric_data/3, send_event/3]).
-
-%% gen_server callbacks
--export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
-
--record(state, {
-    service_id :: binary(),
-    %% channel pid info
-    channel_pid :: pid() | undefined,
-    %% port of this process; OSPid = erlang:port_info(Port, os_pid)
-    port :: undefined | port(),
-    %% corresponding OS pid
-    os_pid :: undefined | integer(),
-    %% configuration info
-    manifest :: undefined | efka_manifest:manifest(),
-    inflight = #{},
-
-    %% mapping: #{Ref => Fun}
-    callbacks = #{}
-}).
-
-%%%===================================================================
-%%% API
-%%%===================================================================
-
--spec get_name(ServiceId :: binary()) -> atom().
-get_name(ServiceId) when is_binary(ServiceId) ->
-    list_to_atom("efka_service:" ++ binary_to_list(ServiceId)).
-
--spec get_pid(ServiceId :: binary()) -> undefined | pid().
-get_pid(ServiceId) when is_binary(ServiceId) ->
-    whereis(get_name(ServiceId)).
-
--spec push_config(Pid :: pid(), Ref :: reference(), ConfigJson :: binary()) -> no_return().
-push_config(Pid, Ref, ConfigJson) when is_pid(Pid), is_binary(ConfigJson) ->
-    gen_server:cast(Pid, {push_config, Ref, self(), ConfigJson}).
-
--spec invoke(Pid :: pid(), Ref :: reference(), Payload :: binary()) -> no_return().
-invoke(Pid, Ref, Payload) when is_pid(Pid), is_reference(Ref), is_binary(Payload) ->
-    gen_server:cast(Pid, {invoke, Ref, self(), Payload}).
-
--spec request_config(Pid :: pid()) -> {ok, Config :: binary()}.
-request_config(Pid) when is_pid(Pid) ->
-    gen_server:call(Pid, request_config).
-
--spec metric_data(Pid :: pid(), DeviceUUID :: binary(), Data :: binary()) -> no_return().
-metric_data(Pid, DeviceUUID, Data) when is_pid(Pid), is_binary(DeviceUUID), is_binary(Data) ->
-    gen_server:cast(Pid, {metric_data, DeviceUUID, Data}).
-
--spec send_event(Pid :: pid(), EventType :: integer(), Params :: binary()) -> no_return().
-send_event(Pid, EventType, Params) when is_pid(Pid), is_integer(EventType), is_binary(Params) ->
-    gen_server:cast(Pid, {send_event, EventType, Params}).
-
--spec attach_channel(pid(), pid()) -> ok | {error, Reason :: binary()}.
-attach_channel(Pid, ChannelPid) when is_pid(Pid), is_pid(ChannelPid) ->
-    gen_server:call(Pid, {attach_channel, ChannelPid}).
-
-%% @doc Spawns the server and registers the local name (unique)
--spec(start_link(Name :: atom(), Service :: binary()) ->
-    {ok, Pid :: pid()} | ignore | {error, Reason :: term()}).
-start_link(Name, ServiceId) when is_atom(Name), is_binary(ServiceId) ->
-    gen_server:start_link({local, Name}, ?MODULE, [ServiceId], []).
-
-%%%===================================================================
-%%% gen_server callbacks
-%%%===================================================================
-
-%% @private
-%% @doc Initializes the server
--spec(init(Args :: term()) ->
-    {ok, State :: #state{}} | {ok, State :: #state{}, timeout() | hibernate} |
-    {stop, Reason :: term()} | ignore).
-init([ServiceId]) ->
-    %% ensure terminate/2 is called when the supervisor stops the child via exit(ChildPid, shutdown)
-    erlang:process_flag(trap_exit, true),
-    case service_model:get_service(ServiceId) of
-        error ->
-            lager:notice("[efka_service] service_id: ~p, not found", [ServiceId]),
-            ignore;
-        {ok, #service{root_dir = RootDir}} ->
-            %% the first start must succeed; the later restart logic only makes sense after a successful first start
-            case efka_manifest:new(RootDir) of
-                {ok, Manifest} ->
-                    case efka_manifest:startup(Manifest) of
-                        {ok, Port} ->
-                            {os_pid, OSPid} = erlang:port_info(Port, os_pid),
-                            lager:debug("[efka_service] service: ~p, port: ~p, boot_service success os_pid: ~p", [ServiceId, Port, OSPid]),
-                            {ok, #state{service_id = ServiceId, manifest = Manifest, port = Port, os_pid = OSPid}};
-                        {error, Reason} ->
-                            lager:debug("[efka_service] service: ~p, boot_service get error: ~p", [ServiceId, Reason]),
-                            {stop, Reason}
-                    end;
-                {error, Reason} ->
-                    lager:notice("[efka_service] service: ~p, read manifest.json get error: ~p", [ServiceId, Reason]),
-                    ignore
-            end
-    end.
-
-%% @private
-%% @doc Handling call messages
--spec(handle_call(Request :: term(), From :: {pid(), Tag :: term()},
-    State :: #state{}) ->
-    {reply, Reply :: term(), NewState :: #state{}} |
-    {reply, Reply :: term(), NewState :: #state{}, timeout() | hibernate} |
-    {noreply, NewState :: #state{}} |
-    {noreply, NewState :: #state{}, timeout() | hibernate} |
-    {stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
-    {stop, Reason :: term(), NewState :: #state{}}).
-%% attach the channel
-handle_call({attach_channel, ChannelPid}, _From, State = #state{channel_pid = OldChannelPid, service_id = ServiceId}) ->
-    case is_pid(OldChannelPid) andalso is_process_alive(OldChannelPid) of
-        false ->
-            erlang:monitor(process, ChannelPid),
-            lager:debug("[efka_service] service_id: ~p, channel attached", [ServiceId]),
-            {reply, ok, State#state{channel_pid = ChannelPid}};
-        true ->
-            {reply, {error, <<"channel exists">>}, State}
-    end;
-
-%% request the config (done)
-handle_call(request_config, _From, State = #state{service_id = ServiceId}) ->
-    case service_model:get_config_json(ServiceId) of
-        {ok, ConfigJson} ->
-            {reply, {ok, ConfigJson}, State};
-        error ->
-            {reply, {ok, <<>>}, State}
-    end;
-
-handle_call(_Request, _From, State = #state{}) ->
-    {reply, ok, State}.
-
-%% @private
-%% @doc Handling cast messages
--spec(handle_cast(Request :: term(), State :: #state{}) ->
-    {noreply, NewState :: #state{}} |
-    {noreply, NewState :: #state{}, timeout() | hibernate} |
-    {stop, Reason :: term(), NewState :: #state{}}).
-handle_cast({metric_data, DeviceUUID, LineProtocolData}, State = #state{service_id = ServiceId}) ->
-    lager:debug("[efka_service] metric_data service_id: ~p, device_uuid: ~p, metric data: ~p", [ServiceId, DeviceUUID, LineProtocolData]),
-    efka_agent:metric_data(ServiceId, DeviceUUID, LineProtocolData),
-    {noreply, State};
-
-handle_cast({send_event, EventType, Params}, State = #state{service_id = ServiceId}) ->
-    efka_agent:event(ServiceId, EventType, Params),
-    lager:debug("[efka_service] send_event, service_id: ~p, event_type: ~p, params: ~p", [ServiceId, EventType, Params]),
-    {noreply, State};
-
-%% push a config item
-handle_cast({push_config, Ref, ReceiverPid, ConfigJson}, State = #state{channel_pid = ChannelPid, service_id = ServiceId, inflight = Inflight, callbacks = Callbacks}) ->
-    case is_pid(ChannelPid) andalso is_process_alive(ChannelPid) of
-        true ->
-            efka_tcp_channel:push_config(ChannelPid, Ref, self(), ConfigJson),
-            %% on success the service config must be updated as well
-            CB = fun() -> service_model:set_config(ServiceId, ConfigJson) end,
-            {noreply, State#state{inflight = maps:put(Ref, ReceiverPid, Inflight), callbacks = maps:put(Ref, CB, Callbacks)}};
-        false ->
-            ReceiverPid ! {service_reply, Ref, {error, <<"channel is not alive">>}},
-            {noreply, State}
-    end;
-
-%% invoke a remote call
-handle_cast({invoke, Ref, ReceiverPid, Payload}, State = #state{channel_pid = ChannelPid, inflight = Inflight}) ->
-    case is_pid(ChannelPid) andalso is_process_alive(ChannelPid) of
-        true ->
-            efka_tcp_channel:invoke(ChannelPid, Ref, self(), Payload),
-            {noreply, State#state{inflight = maps:put(Ref, ReceiverPid, Inflight)}};
-        false ->
-            ReceiverPid ! {service_reply, Ref, {error, <<"channel is not alive">>}},
-            {noreply, State}
-    end;
-
-handle_cast(_Request, State = #state{}) ->
-    {noreply, State}.
-
-%% @private
-%% @doc Handling all non call/cast messages
--spec(handle_info(Info :: timeout() | term(), State :: #state{}) ->
-    {noreply, NewState :: #state{}} |
-    {noreply, NewState :: #state{}, timeout() | hibernate} |
-    {stop, Reason :: term(), NewState :: #state{}}).
-%% restart the service
-handle_info({timeout, _, reboot_service}, State = #state{service_id = ServiceId, manifest = Manifest}) ->
-    case efka_manifest:startup(Manifest) of
-        {ok, Port} ->
-            {os_pid, OSPid} = erlang:port_info(Port, os_pid),
-            lager:debug("[efka_service] service_id: ~p, reboot success, port: ~p, os_pid: ~p", [ServiceId, Port, OSPid]),
-            {noreply, State#state{port = Port, os_pid = OSPid}};
-        {error, Reason} ->
-            lager:debug("[efka_service] service_id: ~p, boot_service get error: ~p", [ServiceId, Reason]),
-            try_reboot(),
-            {noreply, State}
-    end;
-
-%% handle the channel's reply
-handle_info({channel_reply, Ref, Reply}, State = #state{inflight = Inflight, callbacks = Callbacks}) ->
-    case maps:take(Ref, Inflight) of
-        error ->
-            {noreply, State};
-        {ReceiverPid, NInflight} ->
-            ReceiverPid ! {service_reply, Ref, Reply},
-            {noreply, State#state{inflight = NInflight, callbacks = trigger_callback(Ref, Callbacks)}}
-    end;
-
-handle_info({Port, {data, Data}}, State = #state{service_id = ServiceId}) when is_port(Port) ->
-    lager:debug("[efka_service] service_id: ~p, port data: ~p", [ServiceId, Data]),
-    {noreply, State};
-
-%% handle port messages; a passive close of the port triggers this, so Port equals State#state.port here
-handle_info({Port, {exit_status, Code}}, State = #state{service_id = ServiceId}) when is_port(Port) ->
-    lager:debug("[efka_service] service_id: ~p, port: ~p, exit with code: ~p", [ServiceId, Port, Code]),
-    {noreply, State#state{port = undefined, os_pid = undefined}};
-
-%% handle the port's exit message
-handle_info({'EXIT', Port, Reason}, State = #state{service_id = ServiceId}) when is_port(Port) ->
-    lager:debug("[efka_service] service_id: ~p, port: ~p, exit with reason: ~p", [ServiceId, Port, Reason]),
-    try_reboot(),
-    {noreply, State#state{port = undefined, os_pid = undefined}};
-
-%% handle the channel process exiting
-handle_info({'DOWN', _Ref, process, ChannelPid, Reason}, State = #state{channel_pid = ChannelPid, service_id = ServiceId}) ->
-    lager:debug("[efka_service] service_id: ~p, channel exited: ~p", [ServiceId, Reason]),
-    {noreply, State#state{channel_pid = undefined, inflight = #{}}}.
-
-%% @private
-%% @doc This function is called by a gen_server when it is about to
-%% terminate. It should be the opposite of Module:init/1 and do any
-%% necessary cleaning up. When it returns, the gen_server terminates
-%% with Reason. The return value is ignored.
--spec(terminate(Reason :: (normal | shutdown | {shutdown, term()} | term()),
-    State :: #state{}) -> term()).
-terminate(Reason, _State = #state{service_id = ServiceId, port = Port, os_pid = OSPid}) ->
-    erlang:is_port(Port) andalso erlang:port_close(Port),
-    catch kill_os_pid(OSPid),
-    lager:debug("[efka_service] service_id: ~p, terminate with reason: ~p", [ServiceId, Reason]),
-    ok.
-
-%% @private
-%% @doc Convert process state when code is changed
--spec(code_change(OldVsn :: term() | {down, term()}, State :: #state{},
-    Extra :: term()) ->
-    {ok, NewState :: #state{}} | {error, Reason :: term()}).
-code_change(_OldVsn, State = #state{}, _Extra) ->
-    {ok, State}.
-
-%%%===================================================================
-%%% Internal functions
-%%%===================================================================
-
-%% kill the OS process
--spec kill_os_pid(integer() | undefined) -> no_return().
-kill_os_pid(undefined) ->
-    ok;
-kill_os_pid(OSPid) when is_integer(OSPid) ->
-    Cmd = lists:flatten(io_lib:format("kill -9 ~p", [OSPid])),
-    lager:debug("kill cmd is: ~p", [Cmd]),
-    os:cmd(Cmd).
-
--spec try_reboot() -> no_return().
-try_reboot() ->
-    erlang:start_timer(5000, self(), reboot_service).
-
--spec trigger_callback(Ref :: reference(), Callbacks :: map()) -> NewCallbacks :: map().
-trigger_callback(Ref, Callbacks) ->
-    case maps:take(Ref, Callbacks) of
-        error ->
-            Callbacks;
-        {Fun, NCallbacks} ->
-            catch Fun(),
-            NCallbacks
-    end.
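The new efka_stream server that follows assembles an uploaded file from base64-encoded chunks and, on finish, keeps the file only if the accumulated byte count matches the size announced at setup. The same flow can be sketched in Python (a minimal illustration under those assumptions, not the module's API):

```python
import base64
import os
import tempfile

class StreamAssembler:
    """Decode base64 chunks into a temp file; keep it only if the size matches."""

    def __init__(self, expected_size):
        self.expected_size = expected_size
        self.acc_size = 0
        fd, self.path = tempfile.mkstemp()
        self.fh = os.fdopen(fd, "wb")

    def data(self, chunk_b64):
        # Mirrors handle_cast({data, ChunkData}, ...): decode, write, count bytes.
        raw = base64.b64decode(chunk_b64)
        self.fh.write(raw)
        self.acc_size += len(raw)

    def finish(self):
        # Mirrors handle_cast(finish, ...): the size check guards file integrity.
        self.fh.close()
        if self.acc_size == self.expected_size:
            return True          # like sending {stream_reply, self(), ok}
        os.remove(self.path)     # size mismatch: discard the partial file
        return False
```

Comparing the accumulated size against the announced size is a cheap integrity check: a dropped or duplicated chunk is caught at finish time rather than leaving a corrupt file behind.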
apps/efka/src/efka_stream.erl (new file, 164 lines)
@@ -0,0 +1,164 @@
+%%%-------------------------------------------------------------------
+%%% @author anlicheng
+%%% @copyright (C) 2025, <COMPANY>
+%%% @doc
+%%%
+%%% @end
+%%% Created : 13. Nov 2025 10:57
+%%%-------------------------------------------------------------------
+-module(efka_stream).
+-author("anlicheng").
+
+-behaviour(gen_server).
+
+%% API
+-export([start_monitor/1]).
+-export([setup/3, data/2, finish/1]).
+
+%% gen_server callbacks
+-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
+
+-define(SERVER, ?MODULE).
+
+-record(state, {
+    parent_pid :: pid(),
+    ref :: reference(),
+    file_size = 0 :: integer(),
+    acc_size = 0 :: integer(),
+    real_file :: undefined | string(),
+    io_device :: undefined | file:fd()
+}).
+
+%%%===================================================================
+%%% API
+%%%===================================================================
+
+-spec setup(StreamPid :: pid(), FileName :: string(), FileSize :: integer()) -> {ok, Path :: string()}.
+setup(StreamPid, FileName, FileSize) when is_pid(StreamPid), is_list(FileName), is_integer(FileSize) ->
+    gen_server:call(StreamPid, {setup, FileName, FileSize}).
+
+-spec data(StreamPid :: pid(), ChunkData :: binary()) -> no_return().
+data(StreamPid, ChunkData) when is_pid(StreamPid), is_binary(ChunkData) ->
+    gen_server:cast(StreamPid, {data, ChunkData}).
+
+-spec finish(StreamPid :: pid()) -> no_return().
+finish(StreamPid) when is_pid(StreamPid) ->
+    gen_server:cast(StreamPid, finish).
+
+%% @doc Spawns the server and registers the local name (unique)
+-spec(start_monitor(ParentPid :: pid()) ->
+    {ok, {Pid :: pid(), MonRef :: reference()}} | ignore | {error, Reason :: term()}).
+start_monitor(ParentPid) when is_pid(ParentPid) ->
+    gen_server:start_monitor(?MODULE, [ParentPid], []).
+
+%%%===================================================================
+%%% gen_server callbacks
+%%%===================================================================
+
+%% @private
+%% @doc Initializes the server
+-spec(init(Args :: term()) ->
+    {ok, State :: #state{}} | {ok, State :: #state{}, timeout() | hibernate} |
+    {stop, Reason :: term()} | ignore).
+init([ParentPid]) ->
+    Ref = erlang:monitor(process, ParentPid),
+    {ok, #state{parent_pid = ParentPid, ref = Ref}}.
+
+%% @private
+%% @doc Handling call messages
+-spec(handle_call(Request :: term(), From :: {pid(), Tag :: term()},
+    State :: #state{}) ->
+    {reply, Reply :: term(), NewState :: #state{}} |
+    {reply, Reply :: term(), NewState :: #state{}, timeout() | hibernate} |
+    {noreply, NewState :: #state{}} |
+    {noreply, NewState :: #state{}, timeout() | hibernate} |
+    {stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
+    {stop, Reason :: term(), NewState :: #state{}}).
+handle_call({setup, FileName, FileSize}, _From, State = #state{}) ->
+    {RealFileName, Path} = make_file(filename:basename(FileName)),
+    {ok, IoDevice} = file:open(RealFileName, [write]),
+
+    {reply, {ok, Path}, State#state{io_device = IoDevice, real_file = RealFileName, file_size = FileSize, acc_size = 0}}.
+
+%% @private
+%% @doc Handling cast messages
+-spec(handle_cast(Request :: term(), State :: #state{}) ->
+    {noreply, NewState :: #state{}} |
+    {noreply, NewState :: #state{}, timeout() | hibernate} |
+    {stop, Reason :: term(), NewState :: #state{}}).
+handle_cast({data, ChunkData}, State = #state{io_device = IoDevice, acc_size = AccSize}) ->
+    Data = base64:decode(ChunkData),
+    Len = byte_size(Data),
+
+    ok = file:write(IoDevice, Data),
+    {noreply, State#state{acc_size = AccSize + Len}};
+handle_cast(finish, State = #state{parent_pid = ParentPid, io_device = IoDevice, acc_size = AccSize, file_size = FileSize, real_file = RealFile}) ->
+    case AccSize == FileSize of
+        true ->
+            ok = file:close(IoDevice),
+            ParentPid ! {stream_reply, self(), ok};
+        false ->
+            ok = file:close(IoDevice),
+            ok = file:delete(RealFile),
|
||||||
|
ParentPid ! {stream_reply, self(), invalid}
|
||||||
|
end,
|
||||||
|
{stop, normal, State}.
|
||||||
|
|
||||||
|
%% @private
|
||||||
|
%% @doc Handling all non call/cast messages
|
||||||
|
-spec(handle_info(Info :: timeout() | term(), State :: #state{}) ->
|
||||||
|
{noreply, NewState :: #state{}} |
|
||||||
|
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
||||||
|
{stop, Reason :: term(), NewState :: #state{}}).
|
||||||
|
handle_info({'DOWN', Ref, process, Pid, normal}, State = #state{ref = Ref, parent_pid = Pid}) ->
|
||||||
|
{noreply, State};
|
||||||
|
handle_info({'DOWN', Ref, process, Pid, Reason}, State = #state{ref = Ref, parent_pid = Pid, io_device = IoDevice, real_file = RealFile}) ->
|
||||||
|
lager:debug("[efka_stream] ws_channel close with reason: ~p", [Reason]),
|
||||||
|
case IoDevice =:= undefined of
|
||||||
|
true ->
|
||||||
|
ok;
|
||||||
|
false ->
|
||||||
|
ok = file:close(IoDevice),
|
||||||
|
RealFile /= undefined andalso file:delete(RealFile)
|
||||||
|
end,
|
||||||
|
{stop, normal, State};
|
||||||
|
handle_info(_Info, State = #state{}) ->
|
||||||
|
{noreply, State}.
|
||||||
|
|
||||||
|
%% @private
|
||||||
|
%% @doc This function is called by a gen_server when it is about to
|
||||||
|
%% terminate. It should be the opposite of Module:init/1 and do any
|
||||||
|
%% necessary cleaning up. When it returns, the gen_server terminates
|
||||||
|
%% with Reason. The return value is ignored.
|
||||||
|
-spec(terminate(Reason :: (normal | shutdown | {shutdown, term()} | term()),
|
||||||
|
State :: #state{}) -> term()).
|
||||||
|
terminate(_Reason, _State = #state{}) ->
|
||||||
|
ok.
|
||||||
|
|
||||||
|
%% @private
|
||||||
|
%% @doc Convert process state when code is changed
|
||||||
|
-spec(code_change(OldVsn :: term() | {down, term()}, State :: #state{},
|
||||||
|
Extra :: term()) ->
|
||||||
|
{ok, NewState :: #state{}} | {error, Reason :: term()}).
|
||||||
|
code_change(_OldVsn, State = #state{}, _Extra) ->
|
||||||
|
{ok, State}.
|
||||||
|
|
||||||
|
%%%===================================================================
|
||||||
|
%%% Internal functions
|
||||||
|
%%%===================================================================
|
||||||
|
|
||||||
|
-spec make_file(Basename :: string()) -> {string(), string()}.
|
||||||
|
make_file(Basename) when is_list(Basename) ->
|
||||||
|
{ok, UploadDir} = application:get_env(efka, upload_dir),
|
||||||
|
{{Y, M, D}, _} = calendar:local_time(),
|
||||||
|
DateDir = io_lib:format("~p-~p-~p", [Y, M, D]),
|
||||||
|
BaseDir = UploadDir ++ DateDir,
|
||||||
|
case filelib:is_dir(BaseDir) of
|
||||||
|
true ->
|
||||||
|
ok;
|
||||||
|
false ->
|
||||||
|
ok = file:make_dir(BaseDir)
|
||||||
|
end,
|
||||||
|
Path = DateDir ++ "/" ++ Basename,
|
||||||
|
|
||||||
|
{UploadDir ++ Path, Path}.
|
||||||
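The callbacks above implement a one-shot upload stream: a parent process (a WebSocket channel, judging by the log tag) starts and monitors the server, announces the file with setup/3, streams base64-encoded chunks with data/2, ends with finish/1, and then waits for the {stream_reply, Pid, ok | invalid} message. A minimal driver sketch of that call sequence — the module name efka_stream is inferred from the lager tag, and upload/3 with its Chunks list is hypothetical:

```erlang
%% Hypothetical driver for the stream server above. Assumes the calling
%% process acts as the parent and Chunks is a list of base64-encoded binaries.
upload(FileName, FileSize, Chunks) ->
    {ok, {StreamPid, _MonRef}} = efka_stream:start_monitor(self()),
    {ok, Path} = efka_stream:setup(StreamPid, FileName, FileSize),
    %% Stream every chunk; the server accumulates the decoded byte count.
    lists:foreach(fun(Chunk) -> efka_stream:data(StreamPid, Chunk) end, Chunks),
    ok = efka_stream:finish(StreamPid),
    %% The server replies ok when the accumulated size matches FileSize,
    %% or invalid (after deleting the file) when it does not.
    receive
        {stream_reply, StreamPid, ok} -> {ok, Path};
        {stream_reply, StreamPid, invalid} -> {error, size_mismatch}
    after 30000 ->
        {error, timeout}
    end.
```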
@@ -13,7 +13,7 @@
 %% API
 -export([start_link/0]).
--export([subscribe/2, publish/2]).
+-export([subscribe/2, publish/3, debug_info/0]).
 -export([match_components/2, is_valid_components/1, of_components/1]).
 
 %% gen_server callbacks
@@ -34,20 +34,26 @@
 }).
 
 -record(state, {
-    subscribers = []
+    subscribers = [],
+    %% Messages published with QoS 1 that have not yet been consumed
+    remaining_messages = []
 }).
 
 %%%===================================================================
 %%% API
 %%%===================================================================
 
--spec subscribe(Topic :: binary(), SubscriberPid :: pid()) -> no_return().
+-spec subscribe(Topic :: binary(), SubscriberPid :: pid()) -> ok | {error, Reason :: binary()}.
 subscribe(Topic, SubscriberPid) when is_binary(Topic), is_pid(SubscriberPid) ->
-    gen_server:cast(?SERVER, {subscribe, Topic, SubscriberPid}).
+    gen_server:call(?SERVER, {subscribe, Topic, SubscriberPid}).
 
--spec publish(Topic :: binary(), Content :: binary()) -> no_return().
-publish(Topic, Content) when is_binary(Topic), is_binary(Content) ->
-    gen_server:cast(?SERVER, {publish, Topic, Content}).
+-spec publish(Topic :: binary(), Qos :: integer(), Content :: binary()) -> ok.
+publish(Topic, Qos, Content) when is_binary(Topic), is_integer(Qos), is_binary(Content) ->
+    gen_server:cast(?SERVER, {publish, Topic, Qos, Content}).
 
+-spec debug_info() -> {ok, Info :: map()}.
+debug_info() ->
+    gen_server:call(?SERVER, debug_info).
+
 %% @doc Spawns the server and registers the local name (unique)
 -spec(start_link() ->
@@ -77,8 +83,26 @@ init([]) ->
     {noreply, NewState :: #state{}, timeout() | hibernate} |
     {stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
     {stop, Reason :: term(), NewState :: #state{}}).
-handle_call(_Request, _From, State = #state{}) ->
-    {reply, ok, State}.
+%% The same SubscriberPid may subscribe to a given topic only once
+handle_call({subscribe, Topic, SubscriberPid}, _From, State = #state{subscribers = Subscribers, remaining_messages = RemainingMessages}) ->
+    Components = of_components(Topic),
+    case is_valid_components(Components) of
+        true ->
+            Sub = #subscriber{topic = Topic, subscriber_pid = SubscriberPid, components = Components, order = order_num(Components)},
+            %% Monitor SubscriberPid so the subscription is cleaned up when the process exits
+            erlang:monitor(process, SubscriberPid),
+            %% Deliver any retained messages that match the new subscription
+            RestRemainingMessages = dispatch_remaining_messages(Sub, RemainingMessages),
+            {reply, ok, State#state{subscribers = Subscribers ++ [Sub], remaining_messages = RestRemainingMessages}};
+        false ->
+            {reply, {error, <<"invalid topic name">>}, State}
+    end;
+handle_call(debug_info, _From, State = #state{subscribers = Subscribers, remaining_messages = RemainingMessages}) ->
+    Info = #{
+        subscribes => Subscribers,
+        remaining_messages => RemainingMessages
+    },
+    {reply, {ok, Info}, State}.
 
 %% @private
 %% @doc Handling cast messages
@@ -86,29 +110,19 @@ handle_call(_Request, _From, State = #state{}) ->
     {noreply, NewState :: #state{}} |
     {noreply, NewState :: #state{}, timeout() | hibernate} |
     {stop, Reason :: term(), NewState :: #state{}}).
-%% The same SubscriberPid may subscribe to a given topic only once
-handle_cast({subscribe, Topic, SubscriberPid}, State = #state{subscribers = Subscribers}) ->
-    Components = of_components(Topic),
-    case is_valid_components(Components) of
-        true ->
-            Sub = #subscriber{topic = Topic, subscriber_pid = SubscriberPid, components = Components, order = order_num(Components)},
-            %% Monitor SubscriberPid so the subscription is cleaned up when the process exits
-            erlang:monitor(process, SubscriberPid),
-
-            {noreply, State#state{subscribers = Subscribers ++ [Sub]}};
-        false ->
-            {noreply, State}
-    end;
-
 %% Publish a message
-handle_cast({publish, Topic, Content}, State = #state{subscribers = Subscribers}) ->
+handle_cast({publish, Topic, Qos, Content}, State = #state{subscribers = Subscribers, remaining_messages = RemainingMessages}) ->
     MatchedSubscribers = match_subscribers(Subscribers, Topic),
-    lists:foreach(fun(#subscriber{subscriber_pid = SubscriberPid}) ->
-        SubscriberPid ! {topic_broadcast, Topic, Content}
-    end, MatchedSubscribers),
-
     lager:debug("[efka_subscription] topic: ~p, content: ~p, match subscribers: ~p", [Topic, Content, MatchedSubscribers]),
-    {noreply, State}.
+    case length(MatchedSubscribers) > 0 of
+        true ->
+            broadcast(Topic, Content, MatchedSubscribers),
+            {noreply, State};
+        false when Qos =:= 0 ->
+            {noreply, State};
+        false ->
+            {noreply, State#state{remaining_messages = [{Topic, Content} | RemainingMessages]}}
+    end.
 
 %% @private
 %% @doc Handling all non call/cast messages
@@ -201,4 +215,23 @@ order_num([<<$*>>|_]) ->
 order_num([<<$+>>|_]) ->
     3;
 order_num([_|Tail]) ->
     order_num(Tail).
+
+broadcast(Topic, Content, MatchedSubscribers) ->
+    lists:foreach(fun(#subscriber{subscriber_pid = SubscriberPid}) ->
+        SubscriberPid ! {topic_broadcast, Topic, Content}
+    end, MatchedSubscribers).
+
+-spec dispatch_remaining_messages(Subscriber :: #subscriber{}, RemainingMessages :: list()) -> RestRemainingMessages :: list().
+dispatch_remaining_messages(#subscriber{subscriber_pid = SubscriberPid, components = Components}, RemainingMessages) when is_list(RemainingMessages) ->
+    %% Deliver retained messages that match this subscription; keep the rest
+    lists:foldl(fun({Topic0, Content0}, Acc) ->
+        Components0 = of_components(Topic0),
+        case match_components(Components0, Components) of
+            true ->
+                SubscriberPid ! {topic_broadcast, Topic0, Content0},
+                Acc;
+            false ->
+                [{Topic0, Content0} | Acc]
+        end
+    end, [], RemainingMessages).
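The reworked pub/sub API above makes subscribe a synchronous call and adds QoS-1 retention: a publish with Qos = 1 that matches no subscriber is parked in remaining_messages and replayed when a matching subscriber arrives, while a QoS-0 publish with no match is dropped. A hypothetical session sketch (the module name efka_subscription comes from the lager tag and the caller in efka_tcp_channel; topics and payloads are made up for illustration):

```erlang
%% Hypothetical demo of the pub/sub flow; run inside any process that can
%% receive Erlang messages (e.g. an attached channel process).
demo() ->
    %% No subscriber yet: the QoS-1 message is retained, the QoS-0 one is dropped.
    ok = efka_subscription:publish(<<"sensor/temp">>, 1, <<"21.5">>),
    ok = efka_subscription:publish(<<"sensor/hum">>, 0, <<"40">>),
    %% Subscribing replays the retained QoS-1 message to this process.
    ok = efka_subscription:subscribe(<<"sensor/temp">>, self()),
    receive
        {topic_broadcast, Topic, Content} -> {got, Topic, Content}
    after 1000 ->
        timeout
    end.
```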
@@ -29,12 +29,48 @@ init([]) ->
     SupFlags = #{strategy => one_for_one, intensity => 1000, period => 3600},
     ChildSpecs = [
         #{
-            id => 'efka_inetd_task_log',
-            start => {'efka_inetd_task_log', start_link, []},
+            id => 'efka_logger',
+            start => {'efka_logger', start_link, ["deploy_log"]},
             restart => permanent,
             shutdown => 2000,
             type => worker,
-            modules => ['efka_inetd_task_log']
+            modules => ['efka_logger']
+        },
+
+        #{
+            id => 'efka_service_sup',
+            start => {'efka_service_sup', start_link, []},
+            restart => permanent,
+            shutdown => 2000,
+            type => supervisor,
+            modules => ['efka_service_sup']
+        },
+
+        %#{
+        %    id => 'docker_events',
+        %    start => {'docker_events', start_link, []},
+        %    restart => permanent,
+        %    shutdown => 2000,
+        %    type => worker,
+        %    modules => ['docker_events']
+        %},
+
+        #{
+            id => cache_model,
+            start => {cache_model, start_link, []},
+            restart => permanent,
+            shutdown => 5000,
+            type => worker,
+            modules => ['cache_model']
+        },
+
+        #{
+            id => service_model,
+            start => {service_model, start_link, []},
+            restart => permanent,
+            shutdown => 5000,
+            type => worker,
+            modules => ['service_model']
         },
 
         #{
@@ -47,49 +83,23 @@ init([]) ->
         },
 
         #{
-            id => 'efka_inetd',
-            start => {'efka_inetd', start_link, []},
+            id => 'docker_manager',
+            start => {'docker_manager', start_link, []},
             restart => permanent,
             shutdown => 2000,
             type => worker,
-            modules => ['efka_inetd']
+            modules => ['docker_manager']
         },
 
         #{
-            id => 'efka_agent',
-            start => {'efka_agent', start_link, []},
+            id => 'efka_remote_agent',
+            start => {'efka_remote_agent', start_link, []},
             restart => permanent,
             shutdown => 2000,
             type => worker,
-            modules => ['efka_agent']
-        },
-
-        #{
-            id => 'efka_tcp_sup',
-            start => {'efka_tcp_sup', start_link, []},
-            restart => permanent,
-            shutdown => 2000,
-            type => supervisor,
-            modules => ['efka_tcp_sup']
-        },
-
-        #{
-            id => 'efka_tcp_server',
-            start => {'efka_tcp_server', start_link, []},
-            restart => permanent,
-            shutdown => 2000,
-            type => worker,
-            modules => ['efka_tcp_server']
-        },
-
-        #{
-            id => 'efka_service_sup',
-            start => {'efka_service_sup', start_link, []},
-            restart => permanent,
-            shutdown => 2000,
-            type => supervisor,
-            modules => ['efka_service_sup']
+            modules => ['efka_remote_agent']
         }
 
     ],
 
     {ok, {SupFlags, ChildSpecs}}.
@@ -1,295 +0,0 @@
-%%%-------------------------------------------------------------------
-%%% @author anlicheng
-%%% @copyright (C) 2025, <COMPANY>
-%%% @doc
-%%%
-%%% @end
-%%% Created : 30. Apr 2025 09:22
-%%%-------------------------------------------------------------------
--module(efka_tcp_channel).
--author("anlicheng").
-
--behaviour(gen_server).
-
-%% API
--export([start_link/1]).
--export([push_config/4, invoke/4]).
-
-%% gen_server callbacks
--export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
-
--define(SERVER, ?MODULE).
-
-%% Maximum time to wait for a pending response
--define(PENDING_TIMEOUT, 10 * 1000).
-%% Packet types
-
-%% Service registration / request
--define(PACKET_REQUEST, 16#01).
-%% Response
--define(PACKET_RESPONSE, 16#02).
-%% Pushed data
--define(PACKET_PUSH, 16#03).
-
--define(PACKET_PUB, 16#04).
-
--record(state, {
-    packet_id = 1,
-    socket :: gen_tcp:socket(),
-    service_id :: undefined | binary(),
-    service_pid :: undefined | pid(),
-    is_registered = false :: boolean(),
-
-    %% Request/response mapping, #{packet_id => {ReceiverPid, Ref}}; in-flight entries need timeout handling
-    inflight = #{}
-}).
-
-%%%===================================================================
-%%% API
-%%%===================================================================
-
--spec push_config(ChannelPid :: pid(), Ref :: reference(), ReceiverPid :: pid(), ConfigJson :: binary()) -> no_return().
-push_config(ChannelPid, Ref, ReceiverPid, ConfigJson) when is_pid(ChannelPid), is_pid(ReceiverPid), is_binary(ConfigJson), is_reference(Ref) ->
-    gen_server:cast(ChannelPid, {push_config, Ref, ReceiverPid, ConfigJson}).
-
--spec invoke(ChannelPid :: pid(), Ref :: reference(), ReceiverPid :: pid(), Payload :: binary()) -> no_return().
-invoke(ChannelPid, Ref, ReceiverPid, Payload) when is_pid(ChannelPid), is_pid(ReceiverPid), is_binary(Payload), is_reference(Ref) ->
-    gen_server:cast(ChannelPid, {invoke, Ref, ReceiverPid, Payload}).
-
-%% @doc Spawns the server and registers the local name (unique)
--spec(start_link(Socket :: gen_tcp:socket()) ->
-    {ok, Pid :: pid()} | ignore | {error, Reason :: term()}).
-start_link(Socket) ->
-    gen_server:start_link(?MODULE, [Socket], []).
-
-%%%===================================================================
-%%% gen_server callbacks
-%%%===================================================================
-
-%% @private
-%% @doc Initializes the server
--spec(init(Args :: term()) ->
-    {ok, State :: #state{}} | {ok, State :: #state{}, timeout() | hibernate} |
-    {stop, Reason :: term()} | ignore).
-init([Socket]) ->
-    ok = inet:setopts(Socket, [{active, true}]),
-    lager:debug("[efka_tcp_channel] get micro service socket: ~p", [Socket]),
-    {ok, #state{socket = Socket}}.
-
-%% @private
-%% @doc Handling call messages
--spec(handle_call(Request :: term(), From :: {pid(), Tag :: term()},
-    State :: #state{}) ->
-    {reply, Reply :: term(), NewState :: #state{}} |
-    {reply, Reply :: term(), NewState :: #state{}, timeout() | hibernate} |
-    {noreply, NewState :: #state{}} |
-    {noreply, NewState :: #state{}, timeout() | hibernate} |
-    {stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
-    {stop, Reason :: term(), NewState :: #state{}}).
-handle_call(_Request, _From, State = #state{}) ->
-    {reply, ok, State}.
-
-%% @private
-%% @doc Handling cast messages
--spec(handle_cast(Request :: term(), State :: #state{}) ->
-    {noreply, NewState :: #state{}} |
-    {noreply, NewState :: #state{}, timeout() | hibernate} |
-    {stop, Reason :: term(), NewState :: #state{}}).
-%% Push configuration to the client
-handle_cast({push_config, Ref, ReceiverPid, ConfigJson}, State = #state{socket = Socket, packet_id = PacketId, inflight = Inflight}) ->
-    PushConfig = #{<<"id">> => PacketId, <<"method">> => <<"push_config">>, <<"params">> => #{<<"config">> => ConfigJson}},
-    Packet = jiffy:encode(PushConfig, [force_utf8]),
-    ok = gen_tcp:send(Socket, <<?PACKET_PUSH:8, Packet/binary>>),
-    erlang:start_timer(?PENDING_TIMEOUT, self(), {pending_timeout, PacketId}),
-    {noreply, State#state{packet_id = next_packet_id(PacketId), inflight = maps:put(PacketId, {ReceiverPid, Ref}, Inflight)}};
-
-%% Remote invocation
-handle_cast({invoke, Ref, ReceiverPid, Payload}, State = #state{socket = Socket, packet_id = PacketId, inflight = Inflight}) ->
-    PushConfig = #{<<"id">> => PacketId, <<"method">> => <<"invoke">>, <<"params">> => #{<<"payload">> => Payload}},
-    Packet = jiffy:encode(PushConfig, [force_utf8]),
-    ok = gen_tcp:send(Socket, <<?PACKET_PUSH:8, Packet/binary>>),
-    erlang:start_timer(?PENDING_TIMEOUT, self(), {pending_timeout, PacketId}),
-    {noreply, State#state{packet_id = next_packet_id(PacketId), inflight = maps:put(PacketId, {ReceiverPid, Ref}, Inflight)}};
-
-handle_cast(_Request, State = #state{}) ->
-    {noreply, State}.
-
-%% @private
-%% @doc Handling all non call/cast messages
--spec(handle_info(Info :: timeout() | term(), State :: #state{}) ->
-    {noreply, NewState :: #state{}} |
-    {noreply, NewState :: #state{}, timeout() | hibernate} |
-    {stop, Reason :: term(), NewState :: #state{}}).
-%% Handle requests initiated by the micro-client towards efka
-handle_info({tcp, Socket, <<?PACKET_REQUEST:8, Data/binary>>}, State = #state{socket = Socket}) ->
-    Request = jiffy:decode(Data, [return_maps]),
-    case handle_request(Request, State) of
-        {ok, NewState} ->
-            {noreply, NewState};
-        {stop, Reason, NewState} ->
-            {stop, Reason, NewState}
-    end;
-
-%% Handle responses from the micro-client back to efka
-handle_info({tcp, Socket, <<?PACKET_RESPONSE:8, Data/binary>>}, State = #state{socket = Socket, inflight = Inflight}) ->
-    Resp = jiffy:decode(Data, [return_maps]),
-    case Resp of
-        #{<<"id">> := Id, <<"result">> := Result} ->
-            case maps:take(Id, Inflight) of
-                error ->
-                    lager:warning("[tcp_channel] get unknown publish response message: ~p, packet_id: ~p", [Resp, Id]),
-                    {noreply, State};
-                {{ReceiverPid, Ref}, NInflight} ->
-                    case is_pid(ReceiverPid) andalso is_process_alive(ReceiverPid) of
-                        true ->
-                            ReceiverPid ! {channel_reply, Ref, {ok, Result}};
-                        false ->
-                            lager:warning("[tcp_channel] get publish response message: ~p, packet_id: ~p, but receiver_pid is dead", [Resp, Id])
-                    end,
-                    {noreply, State#state{inflight = NInflight}}
-            end;
-        #{<<"id">> := Id, <<"error">> := #{<<"code">> := _Code, <<"message">> := Error}} ->
-            case maps:take(Id, Inflight) of
-                error ->
-                    lager:warning("[tcp_channel] get unknown publish response message: ~p, packet_id: ~p", [Resp, Id]),
-                    {noreply, State};
-                {{ReceiverPid, Ref}, NInflight} ->
-                    case is_pid(ReceiverPid) andalso is_process_alive(ReceiverPid) of
-                        true ->
-                            ReceiverPid ! {channel_reply, Ref, {error, Error}};
-                        false ->
-                            lager:warning("[tcp_channel] get publish response message: ~p, packet_id: ~p, but receiver_pid is dead", [Resp, Id])
-                    end,
-                    {noreply, State#state{inflight = NInflight}}
-            end
-    end;
-
-%% Pending-request timeout handling
-handle_info({timeout, _, {pending_timeout, Id}}, State = #state{inflight = Inflight}) ->
-    case maps:take(Id, Inflight) of
-        error ->
-            {noreply, State};
-        {{ReceiverPid, Ref}, NInflight} ->
-            case is_pid(ReceiverPid) andalso is_process_alive(ReceiverPid) of
-                true ->
-                    ReceiverPid ! {channel_reply, Ref, {error, <<"timeout">>}};
-                false ->
-                    ok
-            end,
-            {noreply, State#state{inflight = NInflight}}
-    end;
-
-%% Subscribed topic broadcast
-handle_info({topic_broadcast, Topic, Content}, State = #state{socket = Socket}) ->
-    Packet = jiffy:encode(#{<<"topic">> => Topic, <<"content">> => Content}, [force_utf8]),
-    ok = gen_tcp:send(Socket, <<?PACKET_PUB:8, Packet/binary>>),
-    {noreply, State};
-
-%% The attached service process exited
-handle_info({'DOWN', _Ref, process, ServicePid, Reason}, State = #state{service_pid = ServicePid}) ->
-    lager:debug("[tcp_channel] service_pid: ~p, exited: ~p", [ServicePid, Reason]),
-    {stop, normal, State#state{service_pid = undefined}};
-
-handle_info({tcp_error, Socket, Reason}, State = #state{socket = Socket, service_id = ServiceId}) ->
-    lager:debug("[tcp_channel] tcp_error: ~p, assoc service: ~p", [Reason, ServiceId]),
-    {stop, normal, State};
-handle_info({tcp_closed, Socket}, State = #state{socket = Socket, service_id = ServiceId}) ->
-    lager:debug("[tcp_channel] tcp_closed: ~p, assoc service: ~p", [Socket, ServiceId]),
-    {stop, normal, State}.
-
-%% @private
-%% @doc This function is called by a gen_server when it is about to
-%% terminate. It should be the opposite of Module:init/1 and do any
-%% necessary cleaning up. When it returns, the gen_server terminates
-%% with Reason. The return value is ignored.
--spec(terminate(Reason :: (normal | shutdown | {shutdown, term()} | term()),
-    State :: #state{}) -> term()).
-terminate(_Reason, _State = #state{}) ->
-    ok.
-
-%% @private
-%% @doc Convert process state when code is changed
--spec(code_change(OldVsn :: term() | {down, term()}, State :: #state{},
-    Extra :: term()) ->
-    {ok, NewState :: #state{}} | {error, Reason :: term()}).
-code_change(_OldVsn, State = #state{}, _Extra) ->
-    {ok, State}.
-
-%%%===================================================================
-%%% Internal functions
-%%%===================================================================
-
-%% Registration
-handle_request(#{<<"id">> := Id, <<"method">> := <<"register">>, <<"params">> := #{<<"service_id">> := ServiceId}}, State = #state{socket = Socket}) ->
-    case efka_service:get_pid(ServiceId) of
-        undefined ->
-            lager:warning("[efka_tcp_channel] service_id: ~p, not running", [ServiceId]),
-            Packet = json_error(Id, -1, <<"service not running">>),
-            ok = gen_tcp:send(Socket, <<?PACKET_RESPONSE:8, Packet/binary>>),
-            {stop, normal, State};
-        ServicePid when is_pid(ServicePid) ->
-            case efka_service:attach_channel(ServicePid, self()) of
-                ok ->
-                    Packet = json_result(Id, <<"ok">>),
-                    erlang:monitor(process, ServicePid),
-                    ok = gen_tcp:send(Socket, <<?PACKET_RESPONSE:8, Packet/binary>>),
-                    {ok, State#state{service_id = ServiceId, service_pid = ServicePid, is_registered = true}};
-                {error, Error} ->
-                    lager:warning("[efka_tcp_channel] service_id: ~p, attach_channel get error: ~p", [ServiceId, Error]),
-                    Packet = json_error(Id, -1, Error),
-                    ok = gen_tcp:send(Socket, <<?PACKET_RESPONSE:8, Packet/binary>>),
-                    {stop, normal, State}
-            end
-    end;
-
-%% Configuration request
-handle_request(#{<<"id">> := Id, <<"method">> := <<"request_config">>}, State = #state{socket = Socket, service_pid = ServicePid, is_registered = true}) ->
-    {ok, ConfigJson} = efka_service:request_config(ServicePid),
-    Packet = json_result(Id, ConfigJson),
-    ok = gen_tcp:send(Socket, <<?PACKET_RESPONSE:8, Packet/binary>>),
-    {ok, State};
-
-%% Metric data
-handle_request(#{<<"id">> := 0, <<"method">> := <<"metric_data">>, <<"params">> := #{<<"device_uuid">> := DeviceUUID, <<"metric">> := Metric}}, State = #state{service_pid = ServicePid, is_registered = true}) ->
-    efka_service:metric_data(ServicePid, DeviceUUID, Metric),
-    {ok, State};
-
-%% Events
-handle_request(#{<<"id">> := 0, <<"method">> := <<"event">>, <<"params">> := #{<<"event_type">> := EventType, <<"body">> := Body}}, State = #state{service_pid = ServicePid, is_registered = true}) ->
-    efka_service:send_event(ServicePid, EventType, Body),
-    {ok, State};
-
-%% Subscription
-handle_request(#{<<"id">> := 0, <<"method">> := <<"subscribe">>, <<"params">> := #{<<"topic">> := Topic}}, State = #state{is_registered = true}) ->
-    efka_subscription:subscribe(Topic, self()),
-    {ok, State}.
-
-%% Packet ids are 32-bit and wrap around
--spec next_packet_id(PacketId :: integer()) -> NextPacketId :: integer().
-next_packet_id(PacketId) when PacketId >= 4294967295 ->
-    1;
-next_packet_id(PacketId) ->
-    PacketId + 1.
-
--spec json_result(Id :: integer(), Result :: term()) -> binary().
-json_result(Id, Result) when is_integer(Id) ->
-    Response = #{
-        <<"id">> => Id,
-        <<"result">> => Result
-    },
-    jiffy:encode(Response, [force_utf8]).
-
--spec json_error(Id :: integer(), Code :: integer(), Message :: binary()) -> binary().
-json_error(Id, Code, Message) when is_integer(Id), is_integer(Code), is_binary(Message) ->
-    Response = #{
-        <<"id">> => Id,
-        <<"error">> => #{
-            <<"code">> => Code,
-            <<"message">> => Message
-        }
-    },
-    jiffy:encode(Response, [force_utf8]).
@@ -1,46 +0,0 @@
-%%%-------------------------------------------------------------------
-%%% @author anlicheng
-%%% @copyright (C) 2025, <COMPANY>
-%%% @doc
-%%%
-%%% @end
-%%% Created : 29. Apr 2025 23:24
-%%%-------------------------------------------------------------------
--module(efka_tcp_server).
--author("anlicheng").
-
-%% API
--export([start_link/0, init/0]).
-
-start_link() ->
-    {ok, spawn_link(?MODULE, init, [])}.
-
-%% Listen loop
-init() ->
-    {ok, TcpServerProps} = application:get_env(efka, tcp_server),
-    Port = proplists:get_value(port, TcpServerProps),
-    case gen_tcp:listen(Port, [binary, {packet, 4}, {active, false}, {reuseaddr, true}]) of
-        {ok, ListenSocket} ->
-            lager:debug("[efka_tcp_server] Server started on port ~p~n", [Port]),
-            main_loop(ListenSocket);
-        {error, Reason} ->
-            lager:debug("[efka_tcp_server] Failed to start server: ~p~n", [Reason]),
-            exit(Reason)
-    end.
-
-main_loop(ListenSocket) ->
-    case gen_tcp:accept(ListenSocket) of
-        {ok, Socket} ->
-            %% Spawn a handler process for each new connection
-            {ok, ChannelPid} = efka_tcp_sup:start_child(Socket),
-            ok = gen_tcp:controlling_process(Socket, ChannelPid),
-            %% Keep listening for the next connection
-            main_loop(ListenSocket);
-        {error, closed} ->
-            lager:debug("[efka_tcp_server] Server socket closed"),
-            exit(tcp_closed);
-        {error, Reason} ->
-            lager:debug("[efka_tcp_server] Accept error: ~p", [Reason]),
-            exit(Reason)
-    end.
@@ -1,43 +0,0 @@
-%%%-------------------------------------------------------------------
-%% @doc efka top level supervisor.
-%% @end
-%%%-------------------------------------------------------------------
-
--module(efka_tcp_sup).
-
--behaviour(supervisor).
-
--export([start_link/0, start_child/1]).
-
--export([init/1]).
-
--define(SERVER, ?MODULE).
-
-start_link() ->
-    supervisor:start_link({local, ?SERVER}, ?MODULE, []).
-
-%% sup_flags() = #{strategy => strategy(),         % optional
-%%                 intensity => non_neg_integer(), % optional
-%%                 period => pos_integer()}        % optional
-%% child_spec() = #{id => child_id(),       % mandatory
-%%                  start => mfargs(),      % mandatory
-%%                  restart => restart(),   % optional
-%%                  shutdown => shutdown(), % optional
-%%                  type => worker(),       % optional
-%%                  modules => modules()}   % optional
-init([]) ->
-    SupFlags = #{strategy => simple_one_for_one, intensity => 0, period => 1},
-    ChildSpec = #{
-        id => efka_tcp_channel,
-        start => {efka_tcp_channel, start_link, []},
-        restart => temporary,
-        type => worker
-    },
-    {ok, {SupFlags, [ChildSpec]}}.
-
-%% internal functions
-
-start_child(Socket) ->
-    supervisor:start_child(?MODULE, [Socket]).
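The removed `efka_tcp_sup` relied on a `simple_one_for_one` detail worth noting: the argument list given to `supervisor:start_child/2` is appended to the start args in the child spec, which is how each accepted socket reached its per-connection worker. A minimal self-contained sketch of that pattern (module and function names here are hypothetical, not part of the codebase):

```erlang
-module(sofo_demo).
-behaviour(supervisor).
-export([start_link/0, start_child/1, init/1, worker/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Same shape as the deleted efka_tcp_sup: a simple_one_for_one pool
    %% of temporary workers, one started per call to start_child/1.
    SupFlags = #{strategy => simple_one_for_one, intensity => 0, period => 1},
    Child = #{id => worker,
              start => {?MODULE, worker, []},   %% args from start_child/2 are appended here
              restart => temporary,
              type => worker},
    {ok, {SupFlags, [Child]}}.

%% supervisor:start_child(Sup, [Arg]) ends up calling ?MODULE:worker(Arg).
start_child(Arg) ->
    supervisor:start_child(?MODULE, [Arg]).

worker(Arg) ->
    %% Stand-in for efka_tcp_channel:start_link(Socket).
    {ok, spawn_link(fun() -> receive stop -> Arg end end)}.
```

Because the children are `temporary`, a crashed connection handler is simply dropped rather than restarted, which matches per-socket workers: a restart could not reuse the dead socket anyway.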
@@ -8,15 +8,13 @@
 %%%-------------------------------------------------------------------
 -module(efka_transport).
 -author("anlicheng").
--include("message_pb.hrl").
--include("efka.hrl").
+-include("message.hrl").
 
 -behaviour(gen_server).
 
 %% API
 -export([start_monitor/3]).
--export([connect/1, auth_request/2, send/3, async_call_reply/3, stop/1]).
--export([request/3]).
+-export([connect/1, auth_request/2, send/2, rpc_reply/3, stop/1]).
 
 %% gen_server callbacks
 -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
@@ -27,10 +25,7 @@
     parent_pid :: pid(),
     host :: string(),
     port :: integer(),
-    socket :: undefined | ssl:sslsocket(),
-    packet_id = 1,
-    %% packet_id correlates each request with its response
-    inflight = #{}
+    socket :: undefined | ssl:sslsocket()
 }).
 
 %%%===================================================================
@@ -41,25 +36,19 @@
 auth_request(Pid, AuthBin) when is_pid(Pid), is_binary(AuthBin) ->
     gen_server:cast(Pid, {auth_request, AuthBin}).
 
--spec request(Pid :: pid(), Method :: integer(), ReqBin :: binary()) -> Ref :: reference().
-request(Pid, Method, ReqBin) when is_pid(Pid), is_binary(ReqBin) ->
-    Ref = make_ref(),
-    gen_server:cast(Pid, {request, Ref, Method, ReqBin}),
-    Ref.
-
 -spec connect(Pid :: pid()) -> no_return().
 connect(Pid) when is_pid(Pid) ->
     gen_server:cast(Pid, connect).
 
--spec send(Pid :: pid(), Method :: integer(), Packet :: binary()) -> no_return().
-send(Pid, Method, Packet) when is_pid(Pid), is_integer(Method), is_binary(Packet) ->
-    gen_server:cast(Pid, {send, Method, Packet}).
+-spec send(Pid :: pid(), Packet :: binary()) -> no_return().
+send(Pid, Packet) when is_pid(Pid), is_binary(Packet) ->
+    gen_server:cast(Pid, {send, Packet}).
 
--spec async_call_reply(Pid :: pid() | undefined, PacketId :: integer(), Response :: binary()) -> no_return().
-async_call_reply(undefined, PacketId, Response) when is_integer(PacketId), is_binary(Response) ->
+-spec rpc_reply(Pid :: pid() | undefined, PacketId :: integer(), Response :: binary()) -> no_return().
+rpc_reply(undefined, PacketId, Response) when is_integer(PacketId), is_binary(Response) ->
     ok;
-async_call_reply(Pid, PacketId, Response) when is_pid(Pid), is_integer(PacketId), is_binary(Response) ->
-    gen_server:cast(Pid, {async_call_reply, PacketId, Response}).
+rpc_reply(Pid, PacketId, Reply) when is_pid(Pid), is_integer(PacketId), is_binary(Reply) ->
+    gen_server:cast(Pid, {rpc_reply, PacketId, Reply}).
 
 %% Stopping is best-effort: the transport process may already have exited
 -spec stop(Pid :: pid() | undefined) -> ok.
@@ -123,34 +112,31 @@ handle_cast(connect, State = #state{host = Host, port = Port, parent_pid = Paren
     end;
 
 %% auth check
-handle_cast({auth_request, AuthRequestBin}, State = #state{parent_pid = ParentPid, socket = Socket, packet_id = PacketId}) ->
-    ok = ssl:send(Socket, <<?PACKET_REQUEST, PacketId:32, ?METHOD_AUTH, AuthRequestBin/binary>>),
+handle_cast({auth_request, AuthRequestBin}, State = #state{parent_pid = ParentPid, socket = Socket}) ->
+    PacketId = 1,
+    ok = ssl:send(Socket, <<?PACKET_REQUEST, PacketId:32, AuthRequestBin/binary>>),
     %% wait for the auth result
     receive
         {ssl, Socket, <<?PACKET_RESPONSE, PacketId:32, ReplyBin/binary>>} ->
-            ParentPid ! {auth_reply, {ok, ReplyBin}},
-            {noreply, State#state{packet_id = PacketId + 1}};
+            {ok, #auth_reply{} = Reply} = message_codec:decode(ReplyBin),
+            ParentPid ! {auth_reply, {ok, Reply}},
+            {noreply, State};
         {ssl, Socket, Info} ->
            lager:warning("[efka_transport] get invalid auth_reply: ~p", [Info]),
            ParentPid ! {auth_reply, {error, invalid_auth_reply}},
-            {noreply, State#state{packet_id = PacketId + 1}}
+            {noreply, State}
    after 5000 ->
        ParentPid ! {auth_reply, {error, timeout}},
-        {noreply, State#state{packet_id = PacketId + 1}}
+        {noreply, State}
    end;
 
-%% submit a request
-handle_cast({request, Ref, Method, ReqBin}, State = #state{socket = Socket, packet_id = PacketId, inflight = Inflight}) ->
-    ok = ssl:send(Socket, <<?PACKET_REQUEST, PacketId:32, Method:8, ReqBin/binary>>),
-    {noreply, State#state{packet_id = PacketId + 1, inflight = maps:put(PacketId, Ref, Inflight)}};
-
-handle_cast({send, Method, Packet}, State = #state{socket = Socket}) ->
-    ok = ssl:send(Socket, <<?PACKET_REQUEST, Method:8, Packet/binary>>),
+handle_cast({send, Packet}, State = #state{socket = Socket}) ->
+    ok = ssl:send(Socket, <<?PACKET_CAST, Packet/binary>>),
     {noreply, State};
 
 %% reply to a server-pushed message
-handle_cast({async_call_reply, PacketId, Response}, State = #state{socket = Socket}) ->
-    ok = ssl:send(Socket, <<?PACKET_ASYNC_CALL_REPLY, PacketId:32, Response/binary>>),
+handle_cast({rpc_reply, PacketId, Reply}, State = #state{socket = Socket}) ->
    ok = ssl:send(Socket, <<?PACKET_RESPONSE, PacketId:32, Reply/binary>>),
     {noreply, State}.
 
 %% @private
@@ -160,29 +146,16 @@ handle_cast({async_call_reply, PacketId, Response}, State = #state{socket = Sock
     {noreply, NewState :: #state{}, timeout() | hibernate} |
     {stop, Reason :: term(), NewState :: #state{}}).
 %% Data pushed by the server: a non-zero packetId requires a reply; zero means no reply is expected
-handle_info({ssl, Socket, <<?PACKET_COMMAND, CommandType:8, Command/binary>>}, State = #state{socket = Socket, parent_pid = ParentPid}) ->
-    ParentPid ! {server_command, CommandType, Command},
+handle_info({ssl, Socket, <<?PACKET_CAST, CastBin/binary>>}, State = #state{socket = Socket, parent_pid = ParentPid}) ->
+    {ok, CastRequest} = message_codec:decode(CastBin),
+    ParentPid ! {server_cast, CastRequest},
     {noreply, State};
 
-handle_info({ssl, Socket, <<?PACKET_PUB, PubBin/binary>>}, State = #state{socket = Socket, parent_pid = ParentPid}) ->
-    #pub{topic = Topic, content = Content} = message_pb:decode_msg(PubBin, pub),
-    ParentPid ! {server_pub, Topic, Content},
+handle_info({ssl, Socket, <<?PACKET_REQUEST, PacketId:32, RPCRequestBin/binary>>}, State = #state{socket = Socket, parent_pid = ParentPid}) ->
+    {ok, RPCRequest} = message_codec:decode(RPCRequestBin),
+    ParentPid ! {server_rpc, PacketId, RPCRequest},
     {noreply, State};
 
-handle_info({ssl, Socket, <<?PACKET_ASYNC_CALL, PacketId:32, AsyncCallBin/binary>>}, State = #state{socket = Socket, parent_pid = ParentPid}) ->
-    ParentPid ! {server_async_call, PacketId, AsyncCallBin},
-    {noreply, State};
-
-%% efka:request <-> iot:response
-handle_info({ssl, Socket, <<?PACKET_RESPONSE, PacketId:32, ReplyBin/binary>>}, State = #state{socket = Socket, inflight = Inflight, parent_pid = ParentPid}) ->
-    case maps:take(PacketId, Inflight) of
-        error ->
-            {noreply, State};
-        {Ref, NInflight} ->
-            ParentPid ! {server_reply, Ref, ReplyBin},
-            {noreply, State#state{inflight = NInflight}}
-    end;
-
 handle_info({ssl_error, Socket, Reason}, State = #state{socket = Socket}) ->
     lager:debug("[efka_transport] ssl error: ~p", [Reason]),
     {stop, normal, State};
@@ -190,8 +163,7 @@ handle_info({ssl_error, Socket, Reason}, State = #state{socket = Socket}) ->
 handle_info({ssl_closed, Socket}, State = #state{socket = Socket}) ->
     {stop, normal, State};
 
-handle_info({timeout, _, ping_ticker}, State = #state{socket = Socket}) ->
-    ok = ssl:send(Socket, <<?PACKET_PING>>),
+handle_info({timeout, _, ping_ticker}, State) ->
     ping_ticker(),
     {noreply, State};
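The transport above frames every message as a one-byte packet-type tag, an optional 32-bit packet id, and a payload, on top of `{packet, 4}` length framing. A minimal sketch of building and parsing such a frame (the module name and the concrete tag values are assumptions for illustration; the real values live in the project's header files):

```erlang
-module(frame_demo).
-export([demo/0]).

%% Hypothetical tag values; the project defines the real ones in efka.hrl.
-define(PACKET_REQUEST, 1).
-define(PACKET_RESPONSE, 2).

%% Build a frame: <<Tag:8, PacketId:32, Body/binary>>.
frame(Tag, PacketId, Body) when is_binary(Body) ->
    <<Tag:8, PacketId:32, Body/binary>>.

%% Parse a frame back into its parts.
parse(<<Tag:8, PacketId:32, Body/binary>>) ->
    {Tag, PacketId, Body}.

demo() ->
    Bin = frame(?PACKET_REQUEST, 7, <<"ping">>),
    {?PACKET_REQUEST, 7, <<"ping">>} = parse(Bin),
    ok.
```

With `{packet, 4}` on the socket, the runtime prepends and strips the 4-byte length header automatically, so the transport only ever sees whole frames in this tag-then-id layout.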
@@ -14,6 +14,7 @@
 -export([timestamp/0, number_format/2, timestamp_ms/0, float_to_binary/2, int_format/2]).
 -export([chunks/2, rand_bytes/1, uuid/0, md5/1, sha_uuid/0]).
 -export([json_data/1, json_error/2]).
+-export([starts_with/2, file_md5/1]).
 
 get_file_md5(FilePath) when is_list(FilePath) ->
     {ok, FileData} = file:read_file(FilePath),
@@ -102,4 +103,29 @@ float_to_binary(V, Decimals) when is_float(V), is_integer(Decimals) ->
 sha_uuid() ->
     Salt = crypto:strong_rand_bytes(32),
     Str = string:lowercase(binary:encode_hex(crypto:hash(sha256, Salt))),
     binary:part(Str, 1, 32).
 
+-spec starts_with(Binary :: binary(), Prefix :: binary()) -> boolean().
+starts_with(Binary, Prefix) when is_binary(Binary), is_binary(Prefix) ->
+    PrefixSize = byte_size(Prefix),
+    case Binary of
+        <<Prefix:PrefixSize/binary, _Rest/binary>> -> true;
+        _ -> false
+    end.
+
+-spec file_md5(FilePath :: string()) -> Md5 :: string().
+file_md5(FilePath) when is_list(FilePath) ->
+    {ok, F} = file:open(FilePath, [read, binary]),
+    Digest = md5_loop(F, crypto:hash_init(md5)),
+    file:close(F),
+    lists:flatten(io_lib:format("~32.16.0b", [binary:decode_unsigned(Digest)])).
+
+md5_loop(F, Context) ->
+    %% read 1 MB per call; the chunk size is tunable
+    case file:read(F, 1024 * 1024) of
+        eof ->
+            crypto:hash_final(Context);
+        {ok, Bin} ->
+            md5_loop(F, crypto:hash_update(Context, Bin))
+    end.
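Unlike the pre-existing `get_file_md5/1`, which reads the whole file into memory, the added `file_md5/1` streams the file through crypto's incremental hashing API, so memory stays bounded regardless of file size. The same `hash_init`/`hash_update`/`hash_final` pattern works for any supported hash; a sketch with SHA-256 (module name hypothetical):

```erlang
-module(stream_hash_demo).
-export([sha256_of/1]).

%% Hash a file of any size in fixed memory by feeding it to the
%% incremental API chunk by chunk (same pattern as file_md5/1 above).
sha256_of(Path) ->
    {ok, F} = file:open(Path, [read, binary, raw]),
    Digest = loop(F, crypto:hash_init(sha256)),
    ok = file:close(F),
    string:lowercase(binary:encode_hex(Digest)).

loop(F, Ctx) ->
    case file:read(F, 64 * 1024) of            %% 64 KB chunks
        {ok, Bin} -> loop(F, crypto:hash_update(Ctx, Bin));
        eof       -> crypto:hash_final(Ctx)
    end.
```

Opening the file with `raw` skips the intermediate file-server process, which matters when hashing large files in a hot path.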
apps/efka/src/message/message_codec.erl (new file, 141 lines)
@@ -0,0 +1,141 @@
+%%%-------------------------------------------------------------------
+%%% @author anlicheng
+%%% @copyright (C) 2025, <COMPANY>
+%%% @doc
+%%%
+%%% @end
+%%% Created : 17. Sep 2025 16:05
+%%%-------------------------------------------------------------------
+-module(message_codec).
+-author("anlicheng").
+-include("message.hrl").
+
+-define(I8, 1).
+-define(I16, 2).
+-define(I32, 3).
+-define(Bytes, 4).
+
+%% API
+-export([encode/2, decode/1]).
+
+-spec encode(MessageType :: integer(), Message :: any()) -> binary().
+encode(MessageType, Message) when is_integer(MessageType) ->
+    Bin = encode0(Message),
+    <<MessageType, Bin/binary>>.
+
+encode0(#auth_request{uuid = UUID, username = Username, salt = Salt, token = Token, timestamp = Timestamp}) ->
+    iolist_to_binary([
+        marshal(?Bytes, UUID),
+        marshal(?Bytes, Username),
+        marshal(?Bytes, Salt),
+        marshal(?Bytes, Token),
+        marshal(?I32, Timestamp)
+    ]);
+encode0(#auth_reply{code = Code, payload = Payload}) ->
+    iolist_to_binary([
+        marshal(?I32, Code),
+        marshal(?Bytes, Payload)
+    ]);
+encode0(#jsonrpc_reply{result = Result, error = undefined}) ->
+    ResultBin = erlang:term_to_binary(#{<<"result">> => Result}),
+    iolist_to_binary([
+        marshal(?Bytes, ResultBin)
+    ]);
+encode0(#jsonrpc_reply{result = undefined, error = Error}) ->
+    ResultBin = erlang:term_to_binary(#{<<"error">> => Error}),
+    iolist_to_binary([
+        marshal(?Bytes, ResultBin)
+    ]);
+encode0(#pub{topic = Topic, qos = Qos, content = Content}) ->
+    iolist_to_binary([
+        marshal(?Bytes, Topic),
+        marshal(?I8, Qos),
+        marshal(?Bytes, Content)
+    ]);
+encode0(#command{command_type = CommandType, command = Command}) ->
+    iolist_to_binary([
+        marshal(?I32, CommandType),
+        marshal(?Bytes, Command)
+    ]);
+
+encode0(#jsonrpc_request{method = Method, params = Params}) ->
+    ReqBody = erlang:term_to_binary(#{<<"method">> => Method, <<"params">> => Params}),
+    iolist_to_binary([
+        marshal(?Bytes, ReqBody)
+    ]);
+encode0(#data{route_key = RouteKey, metric = Metric}) ->
+    iolist_to_binary([
+        marshal(?Bytes, RouteKey),
+        marshal(?Bytes, Metric)
+    ]);
+encode0(#task_event_stream{task_id = TaskId, type = Type, stream = Stream}) ->
+    iolist_to_binary([
+        marshal(?I32, TaskId),
+        marshal(?Bytes, Type),
+        marshal(?Bytes, Stream)
+    ]).
+
+-spec decode(Bin :: binary()) -> {ok, Message :: any()} | error.
+decode(<<PacketType:8, Packet/binary>>) ->
+    case unmarshal(Packet) of
+        {ok, Fields} ->
+            decode0(PacketType, Fields);
+        error ->
+            error
+    end.
+
+decode0(?MESSAGE_AUTH_REQUEST, [UUID, Username, Salt, Token, Timestamp]) ->
+    {ok, #auth_request{uuid = UUID, username = Username, salt = Salt, token = Token, timestamp = Timestamp}};
+decode0(?MESSAGE_JSONRPC_REPLY, [ReplyBin]) ->
+    case erlang:binary_to_term(ReplyBin) of
+        #{<<"result">> := Result} ->
+            {ok, #jsonrpc_reply{result = Result}};
+        #{<<"error">> := Error} ->
+            {ok, #jsonrpc_reply{error = Error}};
+        _ ->
+            error
+    end;
+decode0(?MESSAGE_PUB, [Topic, Qos, Content]) ->
+    {ok, #pub{topic = Topic, qos = Qos, content = Content}};
+decode0(?MESSAGE_COMMAND, [CommandType, Command]) ->
+    {ok, #command{command_type = CommandType, command = Command}};
+decode0(?MESSAGE_AUTH_REPLY, [Code, Payload]) ->
+    {ok, #auth_reply{code = Code, payload = Payload}};
+decode0(?MESSAGE_JSONRPC_REQUEST, [ReqBody]) ->
+    #{<<"method">> := Method, <<"params">> := Params} = erlang:binary_to_term(ReqBody),
+    {ok, #jsonrpc_request{method = Method, params = Params}};
+decode0(?MESSAGE_DATA, [RouteKey, Metric]) ->
+    {ok, #data{route_key = RouteKey, metric = Metric}};
+decode0(?MESSAGE_EVENT_STREAM, [TaskId, Type, Stream]) ->
+    {ok, #task_event_stream{task_id = TaskId, type = Type, stream = Stream}};
+decode0(_, _) ->
+    error.
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%%% helper methods
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+-spec marshal(Type :: integer(), Field :: any()) -> binary().
+marshal(?I8, Field) when is_integer(Field) ->
+    <<?I8, Field:8>>;
+marshal(?I16, Field) when is_integer(Field) ->
+    <<?I16, Field:16>>;
+marshal(?I32, Field) when is_integer(Field) ->
+    <<?I32, Field:32>>;
+marshal(?Bytes, Field) when is_binary(Field) ->
+    Len = byte_size(Field),
+    <<?Bytes, Len:16, Field/binary>>.
+
+-spec unmarshal(Bin :: binary()) -> {ok, Components :: [any()]} | error.
+unmarshal(Bin) when is_binary(Bin) ->
+    unmarshal(Bin, []).
+
+unmarshal(<<>>, Acc) ->
+    {ok, lists:reverse(Acc)};
+unmarshal(<<?I8, F:8, Rest/binary>>, Acc) ->
+    unmarshal(Rest, [F|Acc]);
+unmarshal(<<?I16, F:16, Rest/binary>>, Acc) ->
+    unmarshal(Rest, [F|Acc]);
+unmarshal(<<?I32, F:32, Rest/binary>>, Acc) ->
+    unmarshal(Rest, [F|Acc]);
+unmarshal(<<?Bytes, Len:16, F:Len/binary, Rest/binary>>, Acc) ->
+    unmarshal(Rest, [F|Acc]);
+unmarshal(_, _) ->
+    error.
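The new `message_codec` replaces protobuf (`message_pb`) with a hand-rolled tag-length-value (TLV) scheme: each field is prefixed with a one-byte type tag (1 = i8, 2 = i16, 3 = i32, 4 = bytes), and bytes fields additionally carry a 16-bit length. A self-contained sketch of the same scheme, showing that a field list survives a marshal/unmarshal round trip (module and function names are illustrative, not part of the codebase):

```erlang
-module(tlv_demo).
-export([roundtrip/0]).

%% Field encoders: one-byte tag, then the value; bytes get a 16-bit length.
marshal_i8(V)    -> <<1, V:8>>.
marshal_i32(V)   -> <<3, V:32>>.
marshal_bytes(B) -> <<4, (byte_size(B)):16, B/binary>>.

%% Decoder: dispatch on the tag, accumulate decoded fields in order.
unmarshal(<<>>, Acc) -> lists:reverse(Acc);
unmarshal(<<1, V:8, R/binary>>, Acc) -> unmarshal(R, [V | Acc]);
unmarshal(<<3, V:32, R/binary>>, Acc) -> unmarshal(R, [V | Acc]);
unmarshal(<<4, L:16, B:L/binary, R/binary>>, Acc) -> unmarshal(R, [B | Acc]).

roundtrip() ->
    %% Same field order as encode0(#pub{...}): topic, qos, content.
    Bin = iolist_to_binary([marshal_bytes(<<"topic">>),
                            marshal_i8(1),
                            marshal_bytes(<<"payload">>),
                            marshal_i32(1024)]),
    [<<"topic">>, 1, <<"payload">>, 1024] = unmarshal(Bin, []),
    ok.
```

Note the scheme is positional: `decode0/2` reconstructs records purely from the order of decoded fields, so encoder and decoder must agree on field order for every message type.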
@@ -1,74 +0,0 @@
-%%%-------------------------------------------------------------------
-%%% @author aresei
-%%% @copyright (C) 2023, <COMPANY>
-%%% @doc
-%%%
-%%% @end
-%%% Created : 04. Jul 2023 12:31
-%%%-------------------------------------------------------------------
--module(cache_model).
--author("aresei").
--include("efka_tables.hrl").
--include_lib("stdlib/include/qlc.hrl").
-
--define(TAB, cache).
-
-%% API
--export([create_table/0]).
--export([insert/2, get_all_cache/0, fetch_next/0, delete/1, next_id/0]).
--export([first_key/0]).
-
-create_table() ->
-    %% id generator
-    {atomic, ok} = mnesia:create_table(cache, [
-        {attributes, record_info(fields, cache)},
-        {record_name, cache},
-        {disc_copies, [node()]},
-        {type, ordered_set}
-    ]).
-
-next_id() ->
-    id_generator_model:next_id(?TAB).
-
--spec insert(Method :: integer(), Data :: binary()) -> ok | {error, Reason :: any()}.
-insert(Method, Data) when is_integer(Method), is_binary(Data) ->
-    Cache = #cache{id = next_id(), method = Method, data = Data},
-    case mnesia:transaction(fun() -> mnesia:write(?TAB, Cache, write) end) of
-        {'atomic', ok} ->
-            ok;
-        {'aborted', Reason} ->
-            {error, Reason}
-    end.
-
-fetch_next() ->
-    case mnesia:dirty_first(?TAB) of
-        '$end_of_table' ->
-            error;
-        Id ->
-            [Entry] = mnesia:dirty_read(?TAB, Id),
-            {ok, Entry}
-    end.
-
-delete(Id) when is_integer(Id) ->
-    case mnesia:transaction(fun() -> mnesia:delete(?TAB, Id, write) end) of
-        {'atomic', ok} ->
-            ok;
-        {'aborted', Reason} ->
-            {error, Reason}
-    end.
-
--spec get_all_cache() -> [#cache{}].
-get_all_cache() ->
-    Fun = fun() ->
-        Q = qlc:q([E || E <- mnesia:table(?TAB)]),
-        qlc:e(Q)
-    end,
-    case mnesia:transaction(Fun) of
-        {'atomic', Res} ->
-            Res;
-        {'aborted', _} ->
-            []
-    end.
-
-first_key() ->
-    mnesia:dirty_first(?TAB).
@@ -1,26 +0,0 @@
-%%%-------------------------------------------------------------------
-%%% @author anlicheng
-%%% @copyright (C) 2025, <COMPANY>
-%%% @doc
-%%%
-%%% @end
-%%% Created : 06. May 2025 10:32
-%%%-------------------------------------------------------------------
--module(id_generator_model).
--author("anlicheng").
--include("efka_tables.hrl").
-
-%% API
--export([create_table/0, next_id/1]).
-
-create_table() ->
-    %% id generator
-    {atomic, ok} = mnesia:create_table(id_generator, [
-        {attributes, record_info(fields, id_generator)},
-        {record_name, id_generator},
-        {disc_copies, [node()]},
-        {type, ordered_set}
-    ]).
-
-next_id(Tab) when is_atom(Tab) ->
-    mnesia:dirty_update_counter(id_generator, Tab, 1).
@@ -1,140 +0,0 @@
-%%%-------------------------------------------------------------------
-%%% @author aresei
-%%% @copyright (C) 2023, <COMPANY>
-%%% @doc
-%%%
-%%% @end
-%%% Created : 04. Jul 2023 12:31
-%%%-------------------------------------------------------------------
--module(service_model).
--author("aresei").
--include("efka_tables.hrl").
--include_lib("stdlib/include/qlc.hrl").
-
--define(TAB, service).
-
-%% API
--export([create_table/0]).
--export([insert/1, get_all_services/0, get_all_service_ids/0, get_running_services/0]).
--export([get_config_json/1, set_config/2, get_service/1, get_status/1, change_status/2]).
--export([display_services/0]).
-
-create_table() ->
-    %% id generator
-    {atomic, ok} = mnesia:create_table(service, [
-        {attributes, record_info(fields, service)},
-        {record_name, service},
-        {disc_copies, [node()]},
-        {type, ordered_set}
-    ]).
-
-insert(Service = #service{}) ->
-    case mnesia:transaction(fun() -> mnesia:write(?TAB, Service, write) end) of
-        {'atomic', Res} ->
-            Res;
-        {'aborted', Reason} ->
-            {error, Reason}
-    end.
-
-change_status(ServiceId, NewStatus) when is_binary(ServiceId), is_integer(NewStatus) ->
-    Fun = fun() ->
-        case mnesia:read(?TAB, ServiceId, write) of
-            [] ->
-                mnesia:abort(<<"service not found">>);
-            [Service] ->
-                mnesia:write(?TAB, Service#service{status = NewStatus}, write)
-        end
-    end,
-    case mnesia:transaction(Fun) of
-        {'atomic', ok} ->
-            ok;
-        {'aborted', Reason} ->
-            {error, Reason}
-    end.
-
--spec set_config(ServiceId :: binary(), ConfigJson :: binary()) -> ok | {error, Reason :: any()}.
-set_config(ServiceId, ConfigJson) when is_binary(ServiceId), is_binary(ConfigJson) ->
-    Fun = fun() ->
-        case mnesia:read(?TAB, ServiceId, write) of
-            [] ->
-                mnesia:abort(<<"service not found">>);
-            [S] ->
-                mnesia:write(?TAB, S#service{config_json = ConfigJson}, write)
-        end
-    end,
-    case mnesia:transaction(Fun) of
-        {'atomic', ok} ->
-            ok;
-        {'aborted', Reason} ->
-            {error, Reason}
-    end.
-
--spec get_config_json(ServiceId :: binary()) -> error | {ok, ConfigJson :: binary()}.
-get_config_json(ServiceId) when is_binary(ServiceId) ->
-    case mnesia:dirty_read(?TAB, ServiceId) of
-        [] ->
-            error;
-        [#service{config_json = ConfigJson}] ->
-            {ok, ConfigJson}
-    end.
-
--spec get_status(ServiceId :: binary()) -> Status :: integer().
-get_status(ServiceId) when is_binary(ServiceId) ->
-    case mnesia:dirty_read(?TAB, ServiceId) of
-        [] ->
-            0;
-        [#service{status = Status}] ->
-            Status
-    end.
-
--spec get_service(ServiceId :: binary()) -> error | {ok, Service :: #service{}}.
-get_service(ServiceId) when is_binary(ServiceId) ->
-    case mnesia:dirty_read(?TAB, ServiceId) of
-        [] ->
-            error;
-        [Service] ->
-            {ok, Service}
-    end.
-
--spec get_all_services() -> [#service{}].
-get_all_services() ->
-    Fun = fun() ->
-        Q = qlc:q([E || E <- mnesia:table(?TAB)]),
-        qlc:e(Q)
-    end,
-
-    case mnesia:transaction(Fun) of
-        {'atomic', Res} ->
-            Res;
-        {'aborted', _} ->
-            []
-    end.
-
--spec get_all_service_ids() -> [ServiceId :: binary()].
-get_all_service_ids() ->
-    mnesia:dirty_all_keys(?TAB).
-
--spec get_running_services() -> {ok, [#service{}]} | {error, Reason :: term()}.
-get_running_services() ->
-    F = fun() ->
-        Q = qlc:q([E || E <- mnesia:table(?TAB), E#service.status == 1]),
-        qlc:e(Q)
-    end,
-    case mnesia:transaction(F) of
-        {atomic, Services} ->
-            {ok, Services};
-        {aborted, Error} ->
-            {error, Error}
-    end.
-
-display_services() ->
-    F = fun() ->
-        Q = qlc:q([E || E <- mnesia:table(?TAB)]),
-        qlc:e(Q)
-    end,
-    case mnesia:transaction(F) of
-        {atomic, Services} ->
-            {ok, Services};
-        {aborted, Error} ->
-            {error, Error}
-    end.
@@ -1,46 +0,0 @@
-%%%-------------------------------------------------------------------
-%%% @author aresei
-%%% @copyright (C) 2023, <COMPANY>
-%%% @doc
-%%%
-%%% @end
-%%% Created : 04. Jul 2023 12:31
-%%%-------------------------------------------------------------------
--module(task_log_model).
--author("aresei").
--include("efka_tables.hrl").
--include_lib("stdlib/include/qlc.hrl").
-
--define(TAB, task_log).
-
-%% API
--export([create_table/0]).
--export([insert/2, get_logs/1]).
-
-create_table() ->
-    %% id generator
-    {atomic, ok} = mnesia:create_table(task_log, [
-        {attributes, record_info(fields, task_log)},
-        {record_name, task_log},
-        {disc_copies, [node()]},
-        {type, ordered_set}
-    ]).
-
--spec insert(TaskId :: integer(), Logs :: [binary()]) -> ok | {error, Reason :: term()}.
-insert(TaskId, Logs) when is_integer(TaskId), is_list(Logs) ->
-    TaskLog = #task_log{task_id = TaskId, logs = Logs},
-    case mnesia:transaction(fun() -> mnesia:write(?TAB, TaskLog, write) end) of
-        {'atomic', Res} ->
-            Res;
-        {'aborted', Reason} ->
-            {error, Reason}
-    end.
-
--spec get_logs(TaskId :: integer()) -> Logs :: [binary()].
-get_logs(TaskId) when is_integer(TaskId) ->
-    case mnesia:dirty_read(?TAB, TaskId) of
-        [] ->
-            [];
-        [#task_log{logs = Logs}] ->
-            Logs
-    end.
@ -8,13 +8,12 @@
 %%%-------------------------------------------------------------------
 -module(cache_model).
 -author("anlicheng").
--include("efka_tables.hrl").

 -behaviour(gen_server).

 %% API
 -export([start_link/0]).
--export([insert/2, fetch_next/0, delete/1, get_all_cache/0]).
+-export([insert/1, fetch_next/0, delete/1, get_all_cache/0]).

 %% gen_server callbacks
 -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
@ -30,10 +29,9 @@
 %%% API
 %%%===================================================================

--spec insert(Method :: integer(), Data :: binary()) -> ok | {error, Reason :: any()}.
-insert(Method, Data) when is_integer(Method), is_binary(Data) ->
-    Cache = #cache{id = generate_id(), method = Method, data = Data},
-    gen_server:call(?SERVER, {insert, Cache}).
+-spec insert(Data :: binary()) -> ok | {error, Reason :: any()}.
+insert(Data) when is_binary(Data) ->
+    gen_server:call(?SERVER, {insert, {generate_id(), Data}}).

 fetch_next() ->
     gen_server:call(?SERVER, fetch_next).
@ -41,7 +39,7 @@ fetch_next() ->
 delete(Id) when is_integer(Id) ->
     gen_server:call(?SERVER, {delete, Id}).

--spec get_all_cache() -> [#cache{}].
+-spec get_all_cache() -> [binary()].
 get_all_cache() ->
     gen_server:call(?SERVER, get_all_cache).

@ -63,7 +61,7 @@ start_link() ->
 init([]) ->
     {ok, DetsDir} = application:get_env(efka, dets_dir),
     File = DetsDir ++ "cache.dets",
-    {ok, ?TAB} = dets:open_file(?TAB, [{file, File}, {type, bag}, {keypos, 2}]),
+    {ok, ?TAB} = dets:open_file(?TAB, [{file, File}, {type, bag}, {keypos, 1}]),
     {ok, #state{}}.

 %% @private
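The cache row becomes a plain {Id, Data} tuple instead of a #cache{} record, which is why keypos moves from 2 to 1: dets keys the table on the element at keypos, and a record's first field sits at tuple position 2 because position 1 holds the record tag. A minimal sketch of the distinction (table and record names are illustrative, not taken from the diff):

```erlang
%% Sketch only: shows why keypos must track the key's tuple position.
-record(cache, {id, method, data}).

open_for_records(File) ->
    %% #cache{} is the tuple {cache, Id, Method, Data}: id is element 2.
    dets:open_file(cache_tab, [{file, File}, {type, bag}, {keypos, 2}]).

open_for_tuples(File) ->
    %% {Id, Data}: the id is element 1.
    dets:open_file(cache_tab, [{file, File}, {type, bag}, {keypos, 1}]).
```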
|
|||||||
@ -14,6 +14,7 @@
|
|||||||
|
|
||||||
%% API
|
%% API
|
||||||
-export([start_link/0]).
|
-export([start_link/0]).
|
||||||
|
-export([insert/1, change_status/2, get_status/1, get_service/1, get_all_services/0, get_running_services/0]).
|
||||||
|
|
||||||
%% gen_server callbacks
|
%% gen_server callbacks
|
||||||
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
|
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).
|
||||||
@ -35,14 +36,6 @@ insert(Service = #service{}) ->
|
|||||||
change_status(ServiceId, NewStatus) when is_binary(ServiceId), is_integer(NewStatus) ->
|
change_status(ServiceId, NewStatus) when is_binary(ServiceId), is_integer(NewStatus) ->
|
||||||
gen_server:call(?SERVER, {change_status, ServiceId, NewStatus}).
|
gen_server:call(?SERVER, {change_status, ServiceId, NewStatus}).
|
||||||
|
|
||||||
-spec set_config(ServiceId :: binary(), ConfigJson :: binary()) -> ok | {error, Reason :: any()}.
|
|
||||||
set_config(ServiceId, ConfigJson) when is_binary(ServiceId), is_binary(ConfigJson) ->
|
|
||||||
gen_server:call(?SERVER, {set_config, ServiceId, ConfigJson}).
|
|
||||||
|
|
||||||
-spec get_config_json(ServiceId :: binary()) -> error | {ok, ConfigJson :: binary()}.
|
|
||||||
get_config_json(ServiceId) when is_binary(ServiceId) ->
|
|
||||||
gen_server:call(?SERVER, {get_config_json, ServiceId}).
|
|
||||||
|
|
||||||
-spec get_status(ServiceId :: binary()) -> Status :: integer().
|
-spec get_status(ServiceId :: binary()) -> Status :: integer().
|
||||||
get_status(ServiceId) when is_binary(ServiceId) ->
|
get_status(ServiceId) when is_binary(ServiceId) ->
|
||||||
gen_server:call(?SERVER, {get_status, ServiceId}).
|
gen_server:call(?SERVER, {get_status, ServiceId}).
|
||||||
@ -55,11 +48,7 @@ get_service(ServiceId) when is_binary(ServiceId) ->
|
|||||||
get_all_services() ->
|
get_all_services() ->
|
||||||
gen_server:call(?SERVER, get_all_services).
|
gen_server:call(?SERVER, get_all_services).
|
||||||
|
|
||||||
-spec get_all_service_ids() -> [ServiceId :: binary()].
|
-spec get_running_services() -> {ok, [#service{}]}.
|
||||||
get_all_service_ids() ->
|
|
||||||
gen_server:call(?SERVER, get_all_service_ids).
|
|
||||||
|
|
||||||
-spec get_running_services() -> {ok, [#service{}]} | {error, Reason :: term()}.
|
|
||||||
get_running_services() ->
|
get_running_services() ->
|
||||||
gen_server:call(?SERVER, get_running_services).
|
gen_server:call(?SERVER, get_running_services).
|
||||||
|
|
||||||
@ -81,7 +70,7 @@ start_link() ->
|
|||||||
init([]) ->
|
init([]) ->
|
||||||
{ok, DetsDir} = application:get_env(efka, dets_dir),
|
{ok, DetsDir} = application:get_env(efka, dets_dir),
|
||||||
File = DetsDir ++ "service.dets",
|
File = DetsDir ++ "service.dets",
|
||||||
{ok, ?TAB} = dets:open_file(?TAB, [{file, File}, {type, bag}, {keypos, 2}]),
|
{ok, ?TAB} = dets:open_file(?TAB, [{file, File}, {type, set}, {keypos, 2}]),
|
||||||
{ok, #state{}}.
|
{ok, #state{}}.
|
||||||
|
|
||||||
%% @private
|
%% @private
|
||||||
@ -94,8 +83,18 @@ init([]) ->
|
|||||||
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
{noreply, NewState :: #state{}, timeout() | hibernate} |
|
||||||
{stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
|
{stop, Reason :: term(), Reply :: term(), NewState :: #state{}} |
|
||||||
{stop, Reason :: term(), NewState :: #state{}}).
|
{stop, Reason :: term(), NewState :: #state{}}).
|
||||||
handle_call({insert, Service}, _From, State = #state{}) ->
|
handle_call({insert, Service = #service{service_id = ServiceId}}, _From, State = #state{}) ->
|
||||||
ok = dets:insert(?TAB, Service),
|
case dets:lookup(?TAB, ServiceId) of
|
||||||
|
[] ->
|
||||||
|
ok = dets:insert(?TAB, Service);
|
||||||
|
[OldService] ->
|
||||||
|
NewService = OldService#service{
|
||||||
|
meta_data = Service#service.meta_data,
|
||||||
|
container_name = Service#service.container_name,
|
||||||
|
update_ts = Service#service.update_ts
|
||||||
|
},
|
||||||
|
ok = dets:insert(?TAB, NewService)
|
||||||
|
end,
|
||||||
{reply, ok, State};
|
{reply, ok, State};
|
||||||
|
|
||||||
handle_call({change_status, ServiceId, NewStatus}, _From, State = #state{}) ->
|
handle_call({change_status, ServiceId, NewStatus}, _From, State = #state{}) ->
|
||||||
@ -108,24 +107,6 @@ handle_call({change_status, ServiceId, NewStatus}, _From, State = #state{}) ->
|
|||||||
{reply, ok, State}
|
{reply, ok, State}
|
||||||
end;
|
end;
|
||||||
|
|
||||||
handle_call({set_config, ServiceId, ConfigJson}, _From, State = #state{}) ->
|
|
||||||
case dets:lookup(?TAB, ServiceId) of
|
|
||||||
[] ->
|
|
||||||
{reply, {error, <<"service not found">>}, State};
|
|
||||||
[OldService] ->
|
|
||||||
NewService = OldService#service{config_json = ConfigJson},
|
|
||||||
ok = dets:insert(?TAB, NewService),
|
|
||||||
{reply, ok, State}
|
|
||||||
end;
|
|
||||||
|
|
||||||
handle_call({get_config_json, ServiceId}, _From, State = #state{}) ->
|
|
||||||
case dets:lookup(?TAB, ServiceId) of
|
|
||||||
[] ->
|
|
||||||
{reply, error, State};
|
|
||||||
[#service{config_json = ConfigJson}] ->
|
|
||||||
{reply, {ok, ConfigJson}, State}
|
|
||||||
end;
|
|
||||||
|
|
||||||
handle_call({get_status, ServiceId}, _From, State = #state{}) ->
|
handle_call({get_status, ServiceId}, _From, State = #state{}) ->
|
||||||
case dets:lookup(?TAB, ServiceId) of
|
case dets:lookup(?TAB, ServiceId) of
|
||||||
[] ->
|
[] ->
|
||||||
@ -141,7 +122,7 @@ handle_call(get_all_services, _From, State = #state{}) ->
|
|||||||
handle_call(get_running_services, _From, State = #state{}) ->
|
handle_call(get_running_services, _From, State = #state{}) ->
|
||||||
Items = dets:foldl(fun(Record, Acc) -> [Record|Acc] end, [], ?TAB),
|
Items = dets:foldl(fun(Record, Acc) -> [Record|Acc] end, [], ?TAB),
|
||||||
RunningItems = lists:filter(fun(#service{status = Status}) -> Status =:= 1 end, lists:reverse(Items)),
|
RunningItems = lists:filter(fun(#service{status = Status}) -> Status =:= 1 end, lists:reverse(Items)),
|
||||||
{reply, RunningItems, State};
|
{reply, {ok, RunningItems}, State};
|
||||||
|
|
||||||
handle_call(_Request, _From, State = #state{}) ->
|
handle_call(_Request, _From, State = #state{}) ->
|
||||||
{reply, ok, State}.
|
{reply, ok, State}.
|
||||||
|
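The rewritten insert clause, together with the bag-to-set table change, turns the write into an upsert: an existing row is looked up and only meta_data, container_name and update_ts are taken from the incoming record, so stored fields such as status survive re-registration. A caller-side sketch of that behavior; the module name service_model is an assumption, since the diff does not show the module header:

```erlang
%% Sketch only: service_model and the exact #service{} fields are assumed.
-include("efka_tables.hrl").

reregister_demo() ->
    S = #service{service_id = <<"svc-1">>, status = 1,
                 container_name = <<"c1">>, update_ts = 100},
    ok = service_model:insert(S),
    %% A second insert with a new container_name updates that field via
    %% the lookup-and-merge path, while the stored status stays 1.
    ok = service_model:insert(S#service{container_name = <<"c2">>,
                                        update_ts = 200}).
```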
(File diff suppressed because it is too large)

apps/efka/src/tests/docker_commands_tests.erl (new file, +104)
@ -0,0 +1,104 @@
+%%%-------------------------------------------------------------------
+%%% @author anlicheng
+%%% @copyright (C) 2025, <COMPANY>
+%%% @doc
+%%%
+%%% @end
+%%% Created : 23. Sep 2025 17:23
+%%%-------------------------------------------------------------------
+-module(docker_commands_tests).
+-author("anlicheng").
+
+%% API
+-export([test_pull/0, test_commands/0, test_create_container/0]).
+
+test_pull() ->
+    Image = <<"docker.1ms.run/library/nginx:latest">>,
+    docker_commands:pull_image(Image, fun(Msg) -> lager:debug("msg is: ~p", [Msg]) end).
+
+test_commands() ->
+    Id = <<"redpanda-console">>,
+    StopRes = docker_commands:stop_container(Id),
+    lager:debug("stop res: ~p", [StopRes]),
+    StartRes = docker_commands:start_container(Id),
+    lager:debug("start res: ~p", [StartRes]).
+
+test_create_container() ->
+    M = #{
+        <<"image">> => <<"docker.1ms.run/library/nginx:latest">>,
+        <<"container_name">> => <<"my_nginx_new1">>,
+        <<"command">> => [
+            <<"nginx">>,
+            <<"-g">>,
+            <<"daemon off;">>
+        ],
+        <<"entrypoint">> => [
+            <<"/docker-entrypoint.sh">>
+        ],
+        <<"envs">> => [
+            <<"ENV1=val1">>,
+            <<"ENV2=val2">>
+        ],
+        <<"env_file">> => [
+            <<"./env.list">>
+        ],
+        <<"ports">> => [
+            <<"8080:80">>,
+            <<"443:443">>
+        ],
+        <<"expose">> => [
+            <<"80">>,
+            <<"443">>
+        ],
+        <<"volumes">> => [
+            <<"/host/data:/data">>,
+            <<"/host/log:/var/log">>
+        ],
+        <<"networks">> => [
+            <<"mynet">>
+        ],
+        <<"labels">> => #{
+            <<"role">> => <<"web">>,
+            <<"env">> => <<"prod">>
+        },
+        <<"restart">> => <<"always">>,
+        <<"user">> => <<"www-data">>,
+        <<"working_dir">> => <<"/app">>,
+        <<"hostname">> => <<"myhost">>,
+        <<"privileged">> => true,
+        <<"cap_add">> => [
+            <<"NET_ADMIN">>
+        ],
+        <<"cap_drop">> => [
+            <<"MKNOD">>
+        ],
+        <<"devices">> => [
+            <<"/dev/snd:/dev/snd">>
+        ],
+        <<"mem_limit">> => <<"512m">>,
+        <<"mem_reservation">> => <<"256m">>,
+        <<"cpu_shares">> => 512,
+        <<"cpus">> => 1.5,
+        <<"ulimits">> => #{
+            <<"nofile">> => <<"1024:2048">>
+        },
+        <<"sysctls">> => #{
+            <<"net.ipv4.ip_forward">> => <<"1">>
+        },
+        <<"tmpfs">> => [
+            <<"/tmp">>
+        ],
+        <<"extra_hosts">> => [
+            <<"host1:192.168.0.1">>
+        ],
+        <<"healthcheck">> => #{
+            <<"test">> => [
+                <<"CMD-SHELL">>,
+                <<"curl -f http://localhost || exit 1">>
+            ],
+            <<"interval">> => <<"30s">>,
+            <<"timeout">> => <<"10s">>,
+            <<"retries">> => 3
+        }
+    },
+    docker_commands:create_container(<<"my_nginx_xx3">>, "/usr/local/code/efka/", M).
@ -2,12 +2,21 @@
 {efka, [
     {root_dir, "/usr/local/code/efka"},

-    {dets_dir, "/tmp/db/"},
+    {dets_dir, "/usr/local/code/tmp/dets/"},

+    {upload_dir, "/usr/local/code/tmp/upload/"},
+
     {tcp_server, [
         {port, 18088}
     ]},

+    {http_server, [
+        {port, 18080},
+        {acceptors, 10},
+        {max_connections, 1024},
+        {backlog, 256}
+    ]},
+
     {tls_server, [
         {host, "localhost"},
         {port, 443}
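The new http_server block would be read from the application environment the same way the existing dets_dir setting is; a minimal sketch, assuming the efka app name used elsewhere in this config:

```erlang
%% Sketch: fetch the new http_server settings from the app env.
http_server_opts() ->
    {ok, HttpCfg} = application:get_env(efka, http_server),
    Port = proplists:get_value(port, HttpCfg, 18080),
    Acceptors = proplists:get_value(acceptors, HttpCfg, 10),
    {Port, Acceptors}.
```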
@ -19,23 +19,34 @@ message AuthReply {
 // service_id proactively subscribes to messages; broadcast-based communication
 message Pub {
     string topic = 1;
-    string content = 2;
+    bytes content = 2;
+}
+
+message Command {
+    string command_type = 1;
+    bytes command = 2;
 }

 ///// Messages pushed proactively by the server

-message AsyncCallReply {
-    // 0: failure, 1: success
-    uint32 code = 1;
-    string result = 2;
-    string message = 3;
-}
-
-// Deployment logic
-message Deploy {
-    uint32 task_id = 1;
-    string service_id = 2;
-    string tar_url = 3;
+// Deployment logic
+message RPCDeploy {
+    uint32 task_id = 1;
+    // json
+    string config = 2;
+}
+
+message RPCStartContainer {
+    string container_name = 1;
+}
+
+message RPCStopContainer {
+    string container_name = 1;
+}
+
+message RPCConfigContainer {
+    string container_name = 1;
+    bytes config = 2;
 }

 // Fetch a task's logs
@ -43,18 +54,11 @@ message FetchTaskLog {
     uint32 task_id = 1;
 }

-// Requires a response; call initiated by the cloud, exposed to the user
-message Invoke {
-    string service_id = 1;
-    string payload = 2;
-    uint32 timeout = 3;
-}
-
 // Parameter configuration
-message PushServiceConfig {
-    string service_id = 1;
-    string config_json = 2;
-    uint32 timeout = 3;
+message ContainerConfig {
+    string container_name = 1;
+    // arbitrary data format
+    bytes config = 2;
 }

 /////// Message types reported proactively by EFKA
@ -63,8 +67,15 @@ message PushServiceConfig {
 message Data {
     string service_id = 1;
     string device_uuid = 2;
+    string route_key = 3;
     // measurement[,tag_key=tag_value...] field_key=field_value[,field_key2=field_value2...] [timestamp]
-    string metric = 3;
+    bytes metric = 4;
+}
+
+message Event {
+    string service_id = 1;
+    uint32 event_type = 2;
+    string params = 3;
 }

 //#{<<"adcode">> => 0,<<"boot_time">> => 18256077,<<"city">> => <<>>,
@ -97,24 +108,4 @@ message Ping {
     repeated int32 memory = 12;
     // Interface info: each interface's details are sent as JSON and cannot be defined in advance
     string interfaces = 13;
-}
-
-// Inform message
-message ServiceInform {
-    string service_id = 1;
-    string props = 2;
-    uint32 status = 3;
-    uint32 timestamp = 4;
-}
-
-message Event {
-    string service_id = 1;
-    uint32 event_type = 2;
-    string params = 3;
-}
-
-// Alarm information
-message Alarm {
-    string service_id = 1;
-    string params = 2;
-}
 }
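The comment on the Data.metric field describes an InfluxDB-style line-protocol string; a hypothetical payload matching that shape, with all values invented for illustration:

```erlang
%% Hypothetical line-protocol payload for Data.metric:
%% measurement[,tag_key=tag_value...] field_key=field_value[,...] [timestamp]
example_metric() ->
    <<"cpu,host=edge01 usage=0.63,temp=51 1727078400000000000">>.
```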
@ -2,8 +2,8 @@
 {deps, [
     {sync, ".*", {git, "https://github.com/rustyio/sync.git", {branch, "master"}}},
     {jiffy, ".*", {git, "https://github.com/davisp/jiffy.git", {tag, "1.1.2"}}},
-    {gpb, ".*", {git, "https://github.com/tomas-abrahamsson/gpb.git", {tag, "4.20.0"}}},
-    {jiffy, ".*", {git, "https://github.com/davisp/jiffy.git", {tag, "1.1.1"}}},
+    {cowboy, ".*", {git, "https://github.com/ninenines/cowboy.git", {tag, "2.10.0"}}},
+    {gun, ".*", {git, "https://github.com/ninenines/gun.git", {tag, "2.2.0"}}},
     {parse_trans, ".*", {git, "https://github.com/uwiger/parse_trans", {tag, "3.0.0"}}},
     {lager, ".*", {git,"https://github.com/erlang-lager/lager.git", {tag, "3.9.2"}}}
 ]}.