Table of Contents
- 1. Introduction
- 2. Source Code Analysis
- 2.1 Loading the sofia Module
- 2.2 Inbound Call Handling
1. Introduction
SIP (Session Initiation Protocol) is an application-layer signaling control protocol with many open-source protocol-stack implementations, among them Sofia-SIP. The sofia module in FreeSWITCH is a wrapper around the underlying Sofia-SIP stack, and it provides the crucial inbound and outbound calling capabilities. The figure below is a sequence diagram of the sofia module source code in FreeSWITCH; the rest of this article analyzes that source code.
If you have SpringBoot development experience, you can think of the Sofia-SIP stack as a low-level server in the vein of Tomcat/Netty, except that Sofia-SIP speaks the SIP protocol.
2. Source Code Analysis
2.1 Loading the sofia Module
- In FreeSWITCH 1.10 Source Code Reading (1): Service Startup and How the Event Socket Module Works, the author analyzed the module-loading flow of FreeSWITCH. When the sofia module is loaded, mod_sofia.c#SWITCH_MODULE_LOAD_FUNCTION(mod_sofia_load) is executed. The function is long, but its logic is clear, with roughly the following key points:
- First, the key structure instance mod_sofia_globals is initialized, including the creation of the various message queues; not analyzed in depth
- Call sofia.c#sofia_init() to initialize the underlying Sofia-SIP stack; its core logic is a call to the library function su_init(); not analyzed in depth
- Call sofia.c#config_sofia() to load the XML configuration named sofia.conf, bringing up the underlying Sofia-SIP UA to listen on its port and handle SIP requests
- Call sofia.c#sofia_msg_thread_start() to start the thread that processes messages on the queue mod_sofia_globals.msg_queue
- Call switch_event.c#switch_event_bind() to register the various event listeners with the core event component; not analyzed in depth
- Build the module's external interface structure and register it with the FreeSWITCH core as an endpoint interface. The key part here is storing the callback tables sofia_io_routines and sofia_event_handlers into the designated fields of that structure; what these callbacks do is not covered in this article
- Finally, register the various API and APP interfaces for the FreeSWITCH core to use; not analyzed in depth
SWITCH_MODULE_LOAD_FUNCTION(mod_sofia_load) { switch_chat_interface_t *chat_interface; switch_api_interface_t *api_interface; switch_management_interface_t *management_interface; switch_application_interface_t *app_interface; struct in_addr in; switch_status_t status; memset(&mod_sofia_globals, 0, sizeof(mod_sofia_globals)); mod_sofia_globals.destroy_private.destroy_nh = 1; mod_sofia_globals.destroy_private.is_static = 1; mod_sofia_globals.keep_private.is_static = 1; mod_sofia_globals.pool = pool; switch_mutex_init(&mod_sofia_globals.mutex, SWITCH_MUTEX_NESTED, mod_sofia_globals.pool); switch_core_hash_init(&mod_sofia_globals.profile_hash); switch_core_hash_init(&mod_sofia_globals.gateway_hash); switch_mutex_init(&mod_sofia_globals.hash_mutex, SWITCH_MUTEX_NESTED, mod_sofia_globals.pool); if (switch_event_reserve_subclass(MY_EVENT_NOTIFY_REFER) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_NOTIFY_REFER); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_NOTIFY_WATCHED_HEADER) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_NOTIFY_WATCHED_HEADER); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_UNREGISTER) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_UNREGISTER); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_PROFILE_START) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_PROFILE_START); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_REINVITE) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", 
MY_EVENT_REINVITE); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_REPLACED) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_REPLACED); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_TRANSFEROR) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_TRANSFEROR); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_TRANSFEREE) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_TRANSFEREE); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_ERROR) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_ERROR); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_INTERCEPTED) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_INTERCEPTED); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_GATEWAY_STATE) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_GATEWAY_STATE); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_SIP_USER_STATE) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_SIP_USER_STATE); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_GATEWAY_DEL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_GATEWAY_DEL); 
switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_EXPIRE) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_EXPIRE); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_REGISTER_ATTEMPT) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_REGISTER_ATTEMPT); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_REGISTER_FAILURE) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_REGISTER_FAILURE); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_PRE_REGISTER) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_PRE_REGISTER); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_REGISTER) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_REGISTER); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_GATEWAY_ADD) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_GATEWAY_ADD); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_reserve_subclass(MY_EVENT_BYE_RESPONSE) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't register subclass %s!\n", MY_EVENT_BYE_RESPONSE); switch_goto_status(SWITCH_STATUS_TERM, err); } switch_find_local_ip(mod_sofia_globals.guess_ip, sizeof(mod_sofia_globals.guess_ip), &mod_sofia_globals.guess_mask, AF_INET); in.s_addr = mod_sofia_globals.guess_mask; switch_set_string(mod_sofia_globals.guess_mask_str, 
inet_ntoa(in)); strcpy(mod_sofia_globals.hostname, switch_core_get_switchname()); switch_mutex_lock(mod_sofia_globals.mutex); mod_sofia_globals.running = 1; switch_mutex_unlock(mod_sofia_globals.mutex); mod_sofia_globals.auto_nat = (switch_nat_get_type() ? 1 : 0); switch_queue_create(&mod_sofia_globals.presence_queue, SOFIA_QUEUE_SIZE, mod_sofia_globals.pool); switch_queue_create(&mod_sofia_globals.general_event_queue, SOFIA_QUEUE_SIZE, mod_sofia_globals.pool); mod_sofia_globals.cpu_count = switch_core_cpu_count(); mod_sofia_globals.max_msg_queues = (mod_sofia_globals.cpu_count / 2) + 1; if (mod_sofia_globals.max_msg_queues < 2) { mod_sofia_globals.max_msg_queues = 2; } if (mod_sofia_globals.max_msg_queues > SOFIA_MAX_MSG_QUEUE) { mod_sofia_globals.max_msg_queues = SOFIA_MAX_MSG_QUEUE; } switch_queue_create(&mod_sofia_globals.msg_queue, SOFIA_MSG_QUEUE_SIZE * mod_sofia_globals.max_msg_queues, mod_sofia_globals.pool); /* start one message thread */ switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "Starting initial message thread.\n"); if (sofia_init() != SWITCH_STATUS_SUCCESS) { switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (config_sofia(SOFIA_CONFIG_LOAD, NULL) != SWITCH_STATUS_SUCCESS) { mod_sofia_globals.running = 0; switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } sofia_msg_thread_start(0); switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "Waiting for profiles to start\n"); switch_yield(1500000); if (switch_event_bind(modname, SWITCH_EVENT_CUSTOM, MULTICAST_EVENT, event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_TERM, err); } if (switch_event_bind(modname, SWITCH_EVENT_CONFERENCE_DATA, SWITCH_EVENT_SUBCLASS_ANY, sofia_presence_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); 
switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_PRESENCE_IN, SWITCH_EVENT_SUBCLASS_ANY, sofia_presence_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_PRESENCE_OUT, SWITCH_EVENT_SUBCLASS_ANY, sofia_presence_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_PRESENCE_PROBE, SWITCH_EVENT_SUBCLASS_ANY, sofia_presence_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_ROSTER, SWITCH_EVENT_SUBCLASS_ANY, sofia_presence_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_MESSAGE_WAITING, SWITCH_EVENT_SUBCLASS_ANY, sofia_presence_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_TRAP, SWITCH_EVENT_SUBCLASS_ANY, general_queue_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_NOTIFY, SWITCH_EVENT_SUBCLASS_ANY, general_queue_event_handler, 
NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_PHONE_FEATURE, SWITCH_EVENT_SUBCLASS_ANY, general_queue_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_SEND_MESSAGE, SWITCH_EVENT_SUBCLASS_ANY, general_queue_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } if (switch_event_bind(modname, SWITCH_EVENT_SEND_INFO, SWITCH_EVENT_SUBCLASS_ANY, general_queue_event_handler, NULL) != SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Couldn't bind!\n"); switch_goto_status(SWITCH_STATUS_GENERR, err); return SWITCH_STATUS_GENERR; } /* connect my internal structure to the blank pointer passed to me */ *module_interface = switch_loadable_module_create_module_interface(pool, modname); sofia_endpoint_interface = switch_loadable_module_create_interface(*module_interface, SWITCH_ENDPOINT_INTERFACE); sofia_endpoint_interface->interface_name = "sofia"; sofia_endpoint_interface->io_routines = &sofia_io_routines; sofia_endpoint_interface->state_handler = &sofia_event_handlers; sofia_endpoint_interface->recover_callback = sofia_recover_callback; management_interface = switch_loadable_module_create_interface(*module_interface, SWITCH_MANAGEMENT_INTERFACE); management_interface->relative_oid = "1001"; management_interface->management_function = sofia_manage; add_sofia_json_apis(module_interface); SWITCH_ADD_APP(app_interface, "sofia_sla", "private sofia sla function", "private sofia sla function", sofia_sla_function, "<uuid>", 
SAF_NONE); SWITCH_ADD_APP(app_interface, "sofia_stir_shaken_vs", "Verify SIP Identity header and store result in sip_verstat channel variable", "Verify SIP Identity header and store result in sip_verstat channel variable", sofia_stir_shaken_vs_function, "", SAF_SUPPORT_NOMEDIA); SWITCH_ADD_API(api_interface, "sofia", "Sofia Controls", sofia_function, "<cmd> <args>"); SWITCH_ADD_API(api_interface, "sofia_gateway_data", "Get data from a sofia gateway", sofia_gateway_data_function, "<gateway_name> [ivar|ovar|var] <name>"); switch_console_set_complete("add sofia ::[help:status"); switch_console_set_complete("add sofia status profile ::sofia::list_profiles reg"); switch_console_set_complete("add sofia status gateway ::sofia::list_gateways"); switch_console_set_complete("add sofia loglevel ::[all:default:tport:iptsec:nea:nta:nth_client:nth_server:nua:soa:sresolv:stun ::[0:1:2:3:4:5:6:7:8:9"); switch_console_set_complete("add sofia tracelevel ::[console:alert:crit:err:warning:notice:info:debug"); switch_console_set_complete("add sofia global ::[siptrace::standby::capture::watchdog ::[on:off"); switch_console_set_complete("add sofia global debug ::[presence:sla:none"); switch_console_set_complete("add sofia profile restart all"); switch_console_set_complete("add sofia profile ::sofia::list_profiles ::[start:rescan:restart:check_sync"); switch_console_set_complete("add sofia profile ::sofia::list_profiles stop wait"); switch_console_set_complete("add sofia profile ::sofia::list_profiles flush_inbound_reg reboot"); switch_console_set_complete("add sofia profile ::sofia::list_profiles ::[register:unregister all"); switch_console_set_complete("add sofia profile ::sofia::list_profiles ::[register:unregister:killgw:startgw ::sofia::list_profile_gateway"); switch_console_set_complete("add sofia profile ::sofia::list_profiles killgw _all_"); switch_console_set_complete("add sofia profile ::sofia::list_profiles startgw _all_"); switch_console_set_complete("add sofia profile 
::sofia::list_profiles ::[siptrace:capture:watchdog ::[on:off"); switch_console_set_complete("add sofia profile ::sofia::list_profiles gwlist ::[up:down"); switch_console_set_complete("add sofia recover flush"); switch_console_set_complete("add sofia xmlstatus profile ::sofia::list_profiles reg"); switch_console_set_complete("add sofia xmlstatus gateway ::sofia::list_gateways"); switch_console_add_complete_func("::sofia::list_profiles", list_profiles); switch_console_add_complete_func("::sofia::list_gateways", list_gateways); switch_console_add_complete_func("::sofia::list_profile_gateway", list_profile_gateway); SWITCH_ADD_API(api_interface, "sofia_username_of", "Sofia Username Lookup", sofia_username_of_function, "[profile/]<user>@<domain>"); SWITCH_ADD_API(api_interface, "sofia_contact", "Sofia Contacts", sofia_contact_function, "[profile/]<user>@<domain>"); SWITCH_ADD_API(api_interface, "sofia_count_reg", "Count Sofia registration", sofia_count_reg_function, "[profile/]<user>@<domain>"); SWITCH_ADD_API(api_interface, "sofia_dig", "SIP DIG", sip_dig_function, "<url>"); SWITCH_ADD_API(api_interface, "sofia_presence_data", "Sofia Presence Data", sofia_presence_data_function, "[list|status|rpid|user_agent] [profile/]<user>@domain"); SWITCH_ADD_CHAT(chat_interface, SOFIA_CHAT_PROTO, sofia_presence_chat_send); crtp_init(*module_interface); sofia_stir_shaken_create_services(); /* indicate that the module should continue to be loaded */ return SWITCH_STATUS_SUCCESS; err: mod_sofia_shutdown_cleanup(); return status; }
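One concrete detail in the load function above is how the number of message queues is sized: half the CPU cores plus one, clamped to the range [2, SOFIA_MAX_MSG_QUEUE]. The rule can be isolated as a minimal standalone sketch (the constant value below is an illustrative stand-in, not necessarily the one defined in mod_sofia.h):

```c
#include <assert.h>

/* Illustrative stand-in for SOFIA_MAX_MSG_QUEUE from mod_sofia.h */
#define SOFIA_MAX_MSG_QUEUE 64

/* Mirrors the sizing rule in mod_sofia_load:
 * (cpu_count / 2) + 1, clamped to at least 2 and at most the queue cap. */
static int max_msg_queues_for(int cpu_count)
{
	int n = (cpu_count / 2) + 1;
	if (n < 2) {
		n = 2;
	}
	if (n > SOFIA_MAX_MSG_QUEUE) {
		n = SOFIA_MAX_MSG_QUEUE;
	}
	return n;
}
```

So a single-core box still gets two message queues, while a very large box is capped by the compile-time maximum.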
- The sofia.c#config_sofia() function is very long; its core processing is as follows:
- Call switch_xml.c#switch_xml_open_cfg() to look up the sofia configuration. The configuration may be read from the local sofia.conf.xml file, or fetched from a remote server through the xml_curl module; readers unfamiliar with this can refer to FreeSWITCH 1.10 Source Code Reading (2): How the xml_curl Module Works
- Once the configuration is obtained, its content is parsed. The key part is parsing the profiles node, under which sit the individual profile configurations. In fact, each profile configuration corresponds to one Sofia-SIP UA; processing a profile essentially loads the UA's attributes, including key properties such as its SIP port, into a sofia_profile_t instance. After parsing completes, the key step is calling sofia.c#launch_sofia_profile_thread() to bring up the corresponding SIP UA from the profile configuration
switch_status_t config_sofia(sofia_config_t reload, char *profile_name) { char *cf = "sofia.conf"; switch_xml_t cfg, xml = NULL, xprofile, param, settings, profiles; switch_status_t status = SWITCH_STATUS_SUCCESS; sofia_profile_t *profile = NULL; char url[512] = ""; int profile_found = 0; switch_event_t *params = NULL; sofia_profile_t *profile_already_started = NULL; ...... if (!(xml = switch_xml_open_cfg(cf, &cfg, params))) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Open of %s failed\n", cf); status = SWITCH_STATUS_FALSE; goto done; } ...... if ((profiles = switch_xml_child(cfg, "profiles"))) { for (xprofile = switch_xml_child(profiles, "profile"); xprofile; xprofile = xprofile->next) { char *xprofilename = (char *) switch_xml_attr_soft(xprofile, "name"); char *xprofiledomain = (char *) switch_xml_attr(xprofile, "domain"); ...... if (profile) { if (profile_already_started) { switch_xml_t gateways_tag, domain_tag, domains_tag, aliases_tag, alias_tag; if (sofia_test_flag(profile, TFLAG_ZRTP_PASSTHRU)) { sofia_set_flag(profile, TFLAG_LATE_NEGOTIATION); } if ((gateways_tag = switch_xml_child(xprofile, "gateways"))) { parse_gateways(profile, gateways_tag, NULL); } status = SWITCH_STATUS_SUCCESS; ...... 
if (profile->sipip) { switch_event_t *s_event; if (!profile->extsipport) profile->extsipport = profile->sip_port; launch_sofia_profile_thread(profile); if (profile->odbc_dsn) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_NOTICE, "Connecting ODBC Profile %s [%s]\n", profile->name, url); switch_yield(1000000); } else { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_NOTICE, "Started Profile %s [%s]\n", profile->name, url); } if ((switch_event_create_subclass(&s_event, SWITCH_EVENT_CUSTOM, MY_EVENT_PROFILE_START) == SWITCH_STATUS_SUCCESS)) { switch_event_add_header_string(s_event, SWITCH_STACK_BOTTOM, "module_name", "mod_sofia"); switch_event_add_header_string(s_event, SWITCH_STACK_BOTTOM, "profile_name", profile->name); switch_event_add_header_string(s_event, SWITCH_STACK_BOTTOM, "profile_uri", profile->url); switch_event_fire(&s_event); } } else { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_NOTICE, "Unable to start Profile %s due to no configured sip-ip\n", profile->name); sofia_profile_start_failure(profile, profile->name); } profile = NULL; } if (profile_found) { break; } } } } done: if (profile_already_started) { sofia_glue_release_profile(profile_already_started); } switch_event_destroy(¶ms); if (xml) { switch_xml_free(xml); } if (profile_name && !profile_found) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_WARNING, "No Such Profile '%s'\n", profile_name); status = SWITCH_STATUS_FALSE; } return status; }
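In the profile loop above, switch_xml_child() returns the first matching child and each node's next pointer walks its siblings, so the `<profile>` elements form a singly linked list. A stripped-down sketch of that traversal pattern (the struct below is a hypothetical miniature, not the real switch_xml_t):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of switch_xml_t sibling chaining: each <profile>
 * node points at the next sibling via 'next', as in the config_sofia loop. */
struct xml_node {
	const char *name;
	struct xml_node *next;
};

static int count_profiles(struct xml_node *first_profile)
{
	int n = 0;
	struct xml_node *xprofile;

	/* Same shape as:
	 * for (xprofile = switch_xml_child(profiles, "profile"); xprofile; xprofile = xprofile->next) */
	for (xprofile = first_profile; xprofile; xprofile = xprofile->next) {
		n++;
	}
	return n;
}
```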
- The sofia.c#launch_sofia_profile_thread() function is fairly simple; its key logic is to create a thread that runs sofia.c#sofia_profile_thread_run() as the thread task:

```c
void launch_sofia_profile_thread(sofia_profile_t *profile)
{
	//switch_thread_t *thread;
	switch_threadattr_t *thd_attr = NULL;

	switch_threadattr_create(&thd_attr, profile->pool);
	switch_threadattr_detach_set(thd_attr, 1);
	switch_threadattr_stacksize_set(thd_attr, SWITCH_THREAD_STACKSIZE);
	switch_threadattr_priority_set(thd_attr, SWITCH_PRI_REALTIME);
	switch_thread_create(&profile->thread, thd_attr, sofia_profile_thread_run, profile, profile->pool);
}
```
- The logic of sofia.c#sofia_profile_thread_run() is rather scattered; once sorted out, its core steps are roughly:
- First, call the library function su_root_create to create the event-loop instance that the Sofia-SIP stack needs; not analyzed in depth
- Call sofia_glue.c#sofia_glue_init_sql() to initialize the SIP-related database tables, including sip_registrations, sip_dialogs, and so on; not discussed further
- Call the library function nua_create with that event-loop instance to create the Sofia-SIP UA and listen on the relevant ports, designating sofia.c#sofia_event_callback() as the handler for low-level SIP events; whenever the underlying Sofia-SIP UA receives data, it invokes this callback to notify the upper layer
void *SWITCH_THREAD_FUNC sofia_profile_thread_run(switch_thread_t *thread, void *obj) { sofia_profile_t *profile = (sofia_profile_t *) obj; //switch_memory_pool_t *pool; sip_alias_node_t *node; switch_event_t *s_event; int use_100rel = !sofia_test_pflag(profile, PFLAG_DISABLE_100REL); int use_timer = !sofia_test_pflag(profile, PFLAG_DISABLE_TIMER); int use_rfc_5626 = sofia_test_pflag(profile, PFLAG_ENABLE_RFC5626); const char *supported = NULL; int sanity, attempts = 0; switch_thread_t *worker_thread; switch_status_t st; char qname [128] = ""; ...... profile->s_root = su_root_create(NULL); //profile->home = su_home_new(sizeof(*profile->home)); switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "Creating agent for %s\n", profile->name); if (!sofia_glue_init_sql(profile)) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_CRIT, "Cannot Open SQL Database [%s]!\n", profile->name); sofia_profile_start_failure(profile, profile->name); goto db_fail; } supported = switch_core_sprintf(profile->pool, "%s%s%spath, replaces", use_100rel ? "100rel, " : "", use_timer ? "timer, " : "", use_rfc_5626 ? "outbound, " : ""); ...... do { profile->nua = nua_create(profile->s_root, /* Event loop */ sofia_event_callback, /* Callback for processing events */ profile, /* Additional data to pass to callback */ TAG_IF( ! sofia_test_pflag(profile, PFLAG_TLS) || ! 
profile->tls_only, NUTAG_URL(profile->bindurl)), NTATAG_USER_VIA(1), TPTAG_PONG2PING(1), NTATAG_TCP_RPORT(0), NTATAG_TLS_RPORT(0), #ifdef NTATAG_TLS_ORQ_CONNECT_TIMEOUT TAG_IF(profile->tls_orq_connect_timeout, NTATAG_TLS_ORQ_CONNECT_TIMEOUT(profile->tls_orq_connect_timeout)), /* Profile based timeout */ #endif NUTAG_RETRY_AFTER_ENABLE(0), NUTAG_AUTO_INVITE_100(0), TAG_IF(!strchr(profile->sipip, ':'), SOATAG_AF(SOA_AF_IP4_ONLY)), TAG_IF(strchr(profile->sipip, ':'), SOATAG_AF(SOA_AF_IP6_ONLY)), TAG_IF(sofia_test_pflag(profile, PFLAG_TLS), NUTAG_SIPS_URL(profile->tls_bindurl)), TAG_IF(profile->ws_bindurl, NUTAG_WS_URL(profile->ws_bindurl)), TAG_IF(profile->wss_bindurl, NUTAG_WSS_URL(profile->wss_bindurl)), TAG_IF(profile->tls_cert_dir, NUTAG_CERTIFICATE_DIR(profile->tls_cert_dir)), TAG_IF(sofia_test_pflag(profile, PFLAG_TLS) && profile->tls_passphrase, TPTAG_TLS_PASSPHRASE(profile->tls_passphrase)), TAG_IF(sofia_test_pflag(profile, PFLAG_TLS), TPTAG_TLS_VERIFY_POLICY(profile->tls_verify_policy)), TAG_IF(sofia_test_pflag(profile, PFLAG_TLS), TPTAG_TLS_VERIFY_DEPTH(profile->tls_verify_depth)), TAG_IF(sofia_test_pflag(profile, PFLAG_TLS), TPTAG_TLS_VERIFY_DATE(profile->tls_verify_date)), TAG_IF(sofia_test_pflag(profile, PFLAG_TLS) && profile->tls_verify_in_subjects, TPTAG_TLS_VERIFY_SUBJECTS(profile->tls_verify_in_subjects)), TAG_IF(sofia_test_pflag(profile, PFLAG_TLS), TPTAG_TLS_CIPHERS(profile->tls_ciphers)), TAG_IF(sofia_test_pflag(profile, PFLAG_TLS), TPTAG_TLS_VERSION(profile->tls_version)), TAG_IF(sofia_test_pflag(profile, PFLAG_TLS) && profile->tls_timeout, TPTAG_TLS_TIMEOUT(profile->tls_timeout)), TAG_IF(!strchr(profile->sipip, ':'), NTATAG_UDP_MTU(65535)), TAG_IF(sofia_test_pflag(profile, PFLAG_DISABLE_SRV), NTATAG_USE_SRV(0)), TAG_IF(sofia_test_pflag(profile, PFLAG_DISABLE_NAPTR), NTATAG_USE_NAPTR(0)), TAG_IF(sofia_test_pflag(profile, PFLAG_TCP_PINGPONG), TPTAG_PINGPONG(profile->tcp_pingpong)), TAG_IF(sofia_test_pflag(profile, PFLAG_TCP_PING2PONG), 
TPTAG_PINGPONG(profile->tcp_ping2pong)), TAG_IF(sofia_test_pflag(profile, PFLAG_DISABLE_SRV503), NTATAG_SRV_503(0)), TAG_IF(sofia_test_pflag(profile, PFLAG_SOCKET_TCP_KEEPALIVE), TPTAG_SOCKET_KEEPALIVE(profile->socket_tcp_keepalive)), TAG_IF(sofia_test_pflag(profile, PFLAG_TCP_KEEPALIVE), TPTAG_KEEPALIVE(profile->tcp_keepalive)), NTATAG_DEFAULT_PROXY(profile->outbound_proxy), NTATAG_SERVER_RPORT(profile->server_rport_level), NTATAG_CLIENT_RPORT(profile->client_rport_level), TPTAG_LOG(sofia_test_flag(profile, TFLAG_TPORT_LOG)), TPTAG_CAPT(sofia_test_flag(profile, TFLAG_CAPTURE) ? mod_sofia_globals.capture_server : NULL), TAG_IF(sofia_test_pflag(profile, PFLAG_SIPCOMPACT), NTATAG_SIPFLAGS(MSG_DO_COMPACT)), TAG_IF(profile->timer_t1, NTATAG_SIP_T1(profile->timer_t1)), TAG_IF(profile->timer_t1x64, NTATAG_SIP_T1X64(profile->timer_t1x64)), TAG_IF(profile->timer_t2, NTATAG_SIP_T2(profile->timer_t2)), TAG_IF(profile->timer_t4, NTATAG_SIP_T4(profile->timer_t4)), SIPTAG_ACCEPT_STR("application/sdp, multipart/mixed"), TAG_IF(sofia_test_pflag(profile, PFLAG_NO_CONNECTION_REUSE), TPTAG_REUSE(0)), TAG_END()); /* Last tag should always finish the sequence */ if (!ssl_error && !profile->nua) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR, "Error Creating SIP UA for profile: %s (%s) ATTEMPT %d (RETRY IN %d SEC)\n", profile->name, profile->bindurl, attempts + 1, profile->bind_attempt_interval); if (attempts < profile->bind_attempts) { switch_yield(1000000 * profile->bind_attempt_interval); } } } while (!profile->nua && attempts++ < profile->bind_attempts); ...... 
for (node = profile->aliases; node; node = node->next) { node->nua = nua_create(profile->s_root, /* Event loop */ sofia_event_callback, /* Callback for processing events */ profile, /* Additional data to pass to callback */ NTATAG_SERVER_RPORT(profile->server_rport_level), NUTAG_URL(node->url), TAG_END()); /* Last tag should always finish the sequence */ nua_set_params(node->nua, SIPTAG_USER_AGENT(SIP_NONE), NUTAG_APPL_METHOD("OPTIONS"), NUTAG_APPL_METHOD("REFER"), NUTAG_APPL_METHOD("SUBSCRIBE"), NUTAG_AUTOANSWER(0), NUTAG_AUTOACK(0), NUTAG_AUTOALERT(0), TAG_IF((profile->mflags & MFLAG_REGISTER), NUTAG_ALLOW("REGISTER")), TAG_IF((profile->mflags & MFLAG_REFER), NUTAG_ALLOW("REFER")), NUTAG_ALLOW("INFO"), TAG_IF(profile->pres_type, NUTAG_ALLOW("PUBLISH")), TAG_IF(profile->pres_type, NUTAG_ENABLEMESSAGE(1)), SIPTAG_SUPPORTED_STR(supported), TAG_IF(strcasecmp(profile->user_agent, "_undef_"), SIPTAG_USER_AGENT_STR(profile->user_agent)), TAG_END()); } switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "Activated db for %s\n", profile->name); switch_mutex_init(&profile->ireg_mutex, SWITCH_MUTEX_NESTED, profile->pool); switch_mutex_init(&profile->dbh_mutex, SWITCH_MUTEX_NESTED, profile->pool); switch_mutex_init(&profile->gateway_mutex, SWITCH_MUTEX_NESTED, profile->pool); switch_queue_create(&profile->event_queue, SOFIA_QUEUE_SIZE, profile->pool); switch_snprintf(qname, sizeof(qname), "sofia:%s", profile->name); switch_sql_queue_manager_init_name(qname, &profile->qm, 2, profile->odbc_dsn ? profile->odbc_dsn : profile->dbname, SWITCH_MAX_TRANS, profile->pre_trans_execute, profile->post_trans_execute, profile->inner_pre_trans_execute, profile->inner_post_trans_execute); switch_sql_queue_manager_start(profile->qm); ...... }
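A small detail worth calling out in the thread-run function above: the Supported header is assembled with the format string "%s%s%spath, replaces", where each optional token already carries its own trailing ", ", so the result is always a well-formed comma-separated list. A standalone sketch of that assembly, using plain snprintf in place of switch_core_sprintf's pool allocation:

```c
#include <stdio.h>
#include <string.h>

/* Mirrors the assembly in sofia_profile_thread_run:
 *   switch_core_sprintf(pool, "%s%s%spath, replaces",
 *       use_100rel ? "100rel, " : "",
 *       use_timer ? "timer, " : "",
 *       use_rfc_5626 ? "outbound, " : "");
 * Each enabled token contributes itself plus ", "; the fixed tail is
 * "path, replaces". */
static void build_supported(char *buf, size_t len,
                            int use_100rel, int use_timer, int use_rfc_5626)
{
	snprintf(buf, len, "%s%s%spath, replaces",
	         use_100rel ? "100rel, " : "",
	         use_timer ? "timer, " : "",
	         use_rfc_5626 ? "outbound, " : "");
}
```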
- Step 4 above brought up the underlying Sofia-SIP UA. Returning now to the fourth key point of mod_sofia_load in step 1 of this section: sofia.c#sofia_msg_thread_start() starts the Sofia message thread, with sofia.c#sofia_msg_thread_run() as the thread task. That function keeps polling messages from the queue mod_sofia_globals.msg_queue and dispatches the Sofia-SIP stack messages at the upper layer. With that, loading of the sofia module is essentially complete:

```c
void sofia_msg_thread_start(int idx)
{
	if (idx >= mod_sofia_globals.max_msg_queues ||
		idx >= SOFIA_MAX_MSG_QUEUE ||
		(idx < mod_sofia_globals.msg_queue_len && mod_sofia_globals.msg_queue_thread[idx])) {
		return;
	}

	switch_mutex_lock(mod_sofia_globals.mutex);

	if (idx >= mod_sofia_globals.msg_queue_len) {
		int i;
		mod_sofia_globals.msg_queue_len = idx + 1;

		for (i = 0; i < mod_sofia_globals.msg_queue_len; i++) {
			if (!mod_sofia_globals.msg_queue_thread[i]) {
				switch_threadattr_t *thd_attr = NULL;

				switch_threadattr_create(&thd_attr, mod_sofia_globals.pool);
				switch_threadattr_stacksize_set(thd_attr, SWITCH_THREAD_STACKSIZE);
				//switch_threadattr_priority_set(thd_attr, SWITCH_PRI_REALTIME);
				switch_thread_create(&mod_sofia_globals.msg_queue_thread[i], thd_attr,
									 sofia_msg_thread_run, mod_sofia_globals.msg_queue,
									 mod_sofia_globals.pool);
			}
		}
	}

	switch_mutex_unlock(mod_sofia_globals.mutex);
}
```
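The guard at the top of sofia_msg_thread_start() is easy to misread: it bails out when the index is out of range, or when the index is already covered by a running message thread. Isolated as a predicate (a sketch with simplified stand-ins for the module globals):

```c
#include <assert.h>

/* Illustrative stand-in for SOFIA_MAX_MSG_QUEUE from mod_sofia.h */
#define SOFIA_MAX_MSG_QUEUE 64

/* Simplified stand-in for the early-return guard in sofia_msg_thread_start:
 * returns 1 (skip) when idx is out of range, or when idx is below the current
 * queue length and a thread already runs at that slot. */
static int msg_thread_start_should_skip(int idx, int max_msg_queues,
                                        int msg_queue_len, const int *thread_running)
{
	return idx >= max_msg_queues ||
	       idx >= SOFIA_MAX_MSG_QUEUE ||
	       (idx < msg_queue_len && thread_running[idx]);
}
```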
2.2 Inbound Call Handling
- In the previous section, once the Sofia-SIP UA started, FreeSWITCH gained calling capability. When an external UA sends an INVITE request to the listening port, the FreeSWITCH UA listening on that port handles the request and triggers the event callback sofia.c#sofia_event_callback(). The core processing of this function is as follows:
- First, pre-process the event. For incoming requests such as INVITE, for example, check up front whether the requested session timer is below the profile minimum and whether the system's current session count exceeds the limit
- An INVITE request maps to the event nua_i_invite. Handling it first calls switch_core_session.c#switch_core_session_request_uuid() to create a new switch_core_session_t instance with a channel in state CS_NEW, then calls sofia_glue.c#sofia_glue_new_pvt() to create a private_object_t instance that holds the session's private data, and finally associates the two via sofia_glue.c#sofia_glue_attach_private()
- After the new session is created, switch_core_session.c#switch_core_session_thread_launch() is called to spawn a dedicated thread for the subsequent time-consuming work, so the current thread is not blocked
- Finally, sofia.c#sofia_queue_message() is called to enqueue the message onto mod_sofia_globals.msg_queue
void sofia_event_callback(nua_event_t event, int status, char const *phrase, nua_t *nua, sofia_profile_t *profile, nua_handle_t *nh, sofia_private_t *sofia_private, sip_t const *sip, tagi_t tags[]) { sofia_dispatch_event_t *de; int critical = (((SOFIA_MSG_QUEUE_SIZE * mod_sofia_globals.max_msg_queues) * 900) / 1000); uint32_t sess_count = switch_core_session_count(); uint32_t sess_max = switch_core_session_limit(0); switch(event) { case nua_i_terminated: if ((status == 401 || status == 407 || status == 403) && sofia_private) { switch_core_session_t *session; if ((session = switch_core_session_locate(sofia_private->uuid))) { switch_channel_t *channel = switch_core_session_get_channel(session); int end = 0; if (switch_channel_direction(channel) == SWITCH_CALL_DIRECTION_INBOUND && !switch_channel_test_flag(channel, CF_ANSWERED)) { private_object_t *tech_pvt = switch_core_session_get_private(session); if (status == 403) { switch_channel_set_flag(channel, CF_NO_CDR); switch_channel_hangup(channel, SWITCH_CAUSE_CALL_REJECTED); } else { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "detaching session %s\n", sofia_private->uuid); if (!zstr(tech_pvt->call_id)) { tech_pvt->sofia_private = NULL; tech_pvt->nh = NULL; sofia_set_flag(tech_pvt, TFLAG_BYE); switch_mutex_lock(profile->flag_mutex); switch_core_hash_insert_dup_auto_free(profile->chat_hash, tech_pvt->call_id, switch_core_session_get_uuid(session)); switch_mutex_unlock(profile->flag_mutex); nua_handle_destroy(nh); } else { switch_channel_hangup(channel, SWITCH_CAUSE_DESTINATION_OUT_OF_ORDER); } } end++; } switch_core_session_rwunlock(session); if (end) { goto end; } } } break; case nua_i_invite: case nua_i_register: case nua_i_options: case nua_i_notify: case nua_i_info: if (event == nua_i_invite) { if (sip->sip_session_expires && profile->minimum_session_expires) { if (sip->sip_session_expires->x_delta < profile->minimum_session_expires) { char buf[64] = ""; switch_snprintf(buf, sizeof(buf), "Min-SE: %d", 
profile->minimum_session_expires); nua_respond(nh, SIP_422_SESSION_TIMER_TOO_SMALL, SIPTAG_HEADER_STR(buf),TAG_END()); goto end; } } } if (!sofia_private) { if (sess_count >= sess_max || !sofia_test_pflag(profile, PFLAG_RUNNING) || !switch_core_ready_inbound()) { nua_respond(nh, 503, "Maximum Calls In Progress", SIPTAG_RETRY_AFTER_STR("300"), NUTAG_WITH_THIS(nua), TAG_END()); nua_handle_destroy(nh); goto end; } if (switch_queue_size(mod_sofia_globals.msg_queue) > (unsigned int)critical) { nua_respond(nh, 503, "System Busy", SIPTAG_RETRY_AFTER_STR("300"), NUTAG_WITH_THIS(nua), TAG_END()); nua_handle_destroy(nh); goto end; } if (sofia_test_pflag(profile, PFLAG_STANDBY)) { nua_respond(nh, 503, "System Paused", NUTAG_WITH_THIS(nua), TAG_END()); nua_handle_destroy(nh); goto end; } } break; default: break; } switch_mutex_lock(profile->flag_mutex); profile->queued_events++; switch_mutex_unlock(profile->flag_mutex); de = su_alloc(nua_handle_get_home(nh), sizeof(*de)); memset(de, 0, sizeof(*de)); nua_save_event(nua, de->event); de->nh = nh ? 
nua_handle_ref(nh) : NULL; de->data = nua_event_data(de->event); de->sip = sip_object(de->data->e_msg); de->profile = profile; de->nua = (nua_t *)su_home_ref(nua_get_home(nua)); if (event == nua_i_invite && !sofia_private) { switch_core_session_t *session; private_object_t *tech_pvt = NULL; if (!(sofia_private = su_alloc(nua_handle_get_home(nh), sizeof(*sofia_private)))) { abort(); } memset(sofia_private, 0, sizeof(*sofia_private)); sofia_private->is_call++; sofia_private->is_static++; nua_handle_bind(nh, sofia_private); if (sip->sip_call_id && sip->sip_call_id->i_id) { char *uuid = NULL, *tmp; switch_mutex_lock(profile->flag_mutex); if ((tmp = (char *) switch_core_hash_find(profile->chat_hash, sip->sip_call_id->i_id))) { uuid = strdup(tmp); switch_core_hash_delete(profile->chat_hash, sip->sip_call_id->i_id); } switch_mutex_unlock(profile->flag_mutex); if (uuid) { if ((session = switch_core_session_locate(uuid))) { tech_pvt = switch_core_session_get_private(session); switch_copy_string(sofia_private->uuid_str, switch_core_session_get_uuid(session), sizeof(sofia_private->uuid_str)); sofia_private->uuid = sofia_private->uuid_str; switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "Re-attaching to session %s\n", sofia_private->uuid); de->init_session = session; sofia_clear_flag(tech_pvt, TFLAG_BYE); tech_pvt->sofia_private = NULL; tech_pvt->nh = NULL; switch_core_session_queue_signal_data(session, de); switch_core_session_rwunlock(session); session = NULL; free(uuid); uuid = NULL; goto end; } else { free(uuid); uuid = NULL; sip = NULL; } } } if (!sip || !sip->sip_call_id || zstr(sip->sip_call_id->i_id)) { nua_respond(nh, 503, "INVALID INVITE", TAG_END()); nua_destroy_event(de->event); su_free(nua_handle_get_home(nh), de); switch_mutex_lock(profile->flag_mutex); profile->queued_events--; switch_mutex_unlock(profile->flag_mutex); nua_handle_unref_user(nh); nua_unref_user(nua); goto end; } if (sofia_test_pflag(profile, PFLAG_CALLID_AS_UUID)) { session = 
switch_core_session_request_uuid(sofia_endpoint_interface, SWITCH_CALL_DIRECTION_INBOUND, SOF_NONE, NULL, sip->sip_call_id->i_id); } else { session = switch_core_session_request(sofia_endpoint_interface, SWITCH_CALL_DIRECTION_INBOUND, SOF_NONE, NULL); } if (session) { const char *channel_name = NULL; tech_pvt = sofia_glue_new_pvt(session); if (sip->sip_from) { channel_name = url_set_chanvars(session, sip->sip_from->a_url, sip_from); } if (!channel_name && sip->sip_contact) { channel_name = url_set_chanvars(session, sip->sip_contact->m_url, sip_contact); } if (sip->sip_referred_by) { channel_name = url_set_chanvars(session, sip->sip_referred_by->b_url, sip_referred_by); } sofia_glue_attach_private(session, profile, tech_pvt, channel_name); set_call_id(tech_pvt, sip); } else { nua_respond(nh, 503, "Maximum Calls In Progress", SIPTAG_RETRY_AFTER_STR("300"), TAG_END()); nua_destroy_event(de->event); su_free(nua_handle_get_home(nh), de); switch_mutex_lock(profile->flag_mutex); profile->queued_events--; switch_mutex_unlock(profile->flag_mutex); nua_handle_unref_user(nh); nua_unref_user(nua); goto end; } if (switch_core_session_thread_launch(session) != SWITCH_STATUS_SUCCESS) { char *uuid; if (!switch_core_session_running(session) && !switch_core_session_started(session)) { nua_handle_bind(nh, NULL); sofia_private_free(sofia_private); switch_core_session_destroy(&session); nua_respond(nh, 503, "Maximum Calls In Progress", SIPTAG_RETRY_AFTER_STR("300"), TAG_END()); } switch_mutex_lock(profile->flag_mutex); if ((uuid = switch_core_hash_find(profile->chat_hash, tech_pvt->call_id))) { free(uuid); uuid = NULL; switch_core_hash_delete(profile->chat_hash, tech_pvt->call_id); } switch_mutex_unlock(profile->flag_mutex); goto end; } switch_copy_string(sofia_private->uuid_str, switch_core_session_get_uuid(session), sizeof(sofia_private->uuid_str)); sofia_private->uuid = sofia_private->uuid_str; de->init_session = session; switch_core_session_queue_signal_data(session, de); goto end; 
} if (sofia_private && sofia_private != &mod_sofia_globals.destroy_private && sofia_private != &mod_sofia_globals.keep_private) { switch_core_session_t *session; if ((session = switch_core_session_locate(sofia_private->uuid))) { switch_core_session_queue_signal_data(session, de); switch_core_session_rwunlock(session); goto end; } } sofia_queue_message(de); end: //switch_cond_next(); return; }
-
switch_core_session.c#switch_core_session_thread_launch()
is fairly simple; the key points are: - First it checks whether a thread is already handling this session, and if so it returns immediately
- If the thread pool is enabled in the configuration (it is by default), it calls
switch_core_session.c#switch_core_session_thread_pool_launch()
to hand the session to the thread pool - If the thread pool is not enabled, it spawns a new thread directly, with
switch_core_session.c#switch_core_session_thread()
as the thread task responsible for the session
SWITCH_DECLARE(switch_status_t) switch_core_session_thread_launch(switch_core_session_t *session) { switch_status_t status = SWITCH_STATUS_FALSE; switch_thread_t *thread; switch_threadattr_t *thd_attr; if (switch_test_flag(session, SSF_THREAD_RUNNING) || switch_test_flag(session, SSF_THREAD_STARTED)) { status = SWITCH_STATUS_INUSE; goto end; } if (switch_test_flag((&runtime), SCF_SESSION_THREAD_POOL)) { return switch_core_session_thread_pool_launch(session); } switch_mutex_lock(session->mutex); if (switch_test_flag(session, SSF_THREAD_RUNNING)) { switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_CRIT, "Cannot double-launch thread!\n"); } else if (switch_test_flag(session, SSF_THREAD_STARTED)) { switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_CRIT, "Cannot launch thread again after it has already been run!\n"); } else { switch_set_flag(session, SSF_THREAD_RUNNING); switch_set_flag(session, SSF_THREAD_STARTED); switch_threadattr_create(&thd_attr, session->pool); switch_threadattr_detach_set(thd_attr, 1); switch_threadattr_stacksize_set(thd_attr, SWITCH_THREAD_STACKSIZE); if (switch_thread_create(&thread, thd_attr, switch_core_session_thread, session, session->pool) == SWITCH_STATUS_SUCCESS) { switch_set_flag(session, SSF_THREAD_STARTED); status = SWITCH_STATUS_SUCCESS; } else { switch_clear_flag(session, SSF_THREAD_RUNNING); switch_clear_flag(session, SSF_THREAD_STARTED); switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_CRIT, "Cannot create thread!\n"); thread_launch_failure(); } } switch_mutex_unlock(session->mutex); end: return status; }
-
switch_core_session.c#switch_core_session_thread_pool_launch()
does the following at its core: - It wraps the session in a
switch_thread_data_t
structure, sets that structure's execution function to switch_core_session.c#switch_core_session_thread()
, then enqueues it onto session_manager.thread_queue - It calls
switch_core_session.c#check_queue()
to check the queue depth and decide whether an additional worker thread is needed
SWITCH_DECLARE(switch_status_t) switch_core_session_thread_pool_launch(switch_core_session_t *session) { switch_status_t status = SWITCH_STATUS_INUSE; switch_thread_data_t *td; switch_mutex_lock(session->mutex); if (switch_test_flag(session, SSF_THREAD_RUNNING)) { switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_CRIT, "Cannot double-launch thread!\n"); } else if (switch_test_flag(session, SSF_THREAD_STARTED)) { switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_CRIT, "Cannot launch thread again after it has already been run!\n"); } else { switch_set_flag(session, SSF_THREAD_RUNNING); switch_set_flag(session, SSF_THREAD_STARTED); td = switch_core_session_alloc(session, sizeof(*td)); td->obj = session; td->func = switch_core_session_thread; status = switch_queue_push(session_manager.thread_queue, td); check_queue(); } switch_mutex_unlock(session->mutex); return status; }
-
switch_core_session.c#check_queue()
decides, based on the session manager's counters, whether a new thread must be created. If so, it starts the thread with switch_core_session.c#switch_core_session_thread_pool_worker()
as the thread task static switch_status_t check_queue(void) { switch_status_t status = SWITCH_STATUS_FALSE; switch_mutex_lock(session_manager.mutex); if (session_manager.running >= ++session_manager.busy) { switch_mutex_unlock(session_manager.mutex); return SWITCH_STATUS_SUCCESS; } ++session_manager.running; switch_mutex_unlock(session_manager.mutex); { switch_thread_t *thread; switch_threadattr_t *thd_attr; switch_memory_pool_t *pool; switch_thread_pool_node_t *node; switch_core_new_memory_pool(&pool); node = switch_core_alloc(pool, sizeof(*node)); node->pool = pool; switch_threadattr_create(&thd_attr, node->pool); switch_threadattr_detach_set(thd_attr, 1); switch_threadattr_stacksize_set(thd_attr, SWITCH_THREAD_STACKSIZE); switch_threadattr_priority_set(thd_attr, SWITCH_PRI_LOW); if (switch_thread_create(&thread, thd_attr, switch_core_session_thread_pool_worker, node, node->pool) != SWITCH_STATUS_SUCCESS) { switch_mutex_lock(session_manager.mutex); if (!--session_manager.running) { switch_thread_cond_signal(session_manager.cond); } switch_mutex_unlock(session_manager.mutex); switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_CRIT, "Thread Failure!\n"); switch_core_destroy_memory_pool(&pool); status = SWITCH_STATUS_GENERR; thread_launch_failure(); } else { status = SWITCH_STATUS_SUCCESS; } } return status; }
-
switch_core_session.c#switch_core_session_thread_pool_worker()
at its core polls the queue session_manager.thread_queue and, after popping a message, executes the function wrapped inside it, i.e. the switch_core_session.c#switch_core_session_thread()
function mentioned in step 3 of this section static void *SWITCH_THREAD_FUNC switch_core_session_thread_pool_worker(switch_thread_t *thread, void *obj) { switch_thread_pool_node_t *node = (switch_thread_pool_node_t *) obj; switch_memory_pool_t *pool = node->pool; #ifdef DEBUG_THREAD_POOL switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG10, "Worker Thread %ld Started\n", (long) (intptr_t) thread); #endif for (;;) { void *pop; switch_status_t check_status = switch_queue_pop_timeout(session_manager.thread_queue, &pop, 5000000); if (check_status == SWITCH_STATUS_SUCCESS) { switch_thread_data_t *td = (switch_thread_data_t *) pop; #ifdef DEBUG_THREAD_POOL switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG10, "Worker Thread %ld Processing\n", (long) (intptr_t) thread); #endif td->running = 1; td->func(thread, td->obj); td->running = 0; if (td->pool) { switch_memory_pool_t *pool = td->pool; td = NULL; switch_core_destroy_memory_pool(&pool); } else if (td->alloc) { free(td); } #ifdef DEBUG_THREAD_POOL switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG10, "Worker Thread %ld Done Processing\n", (long)(intptr_t) thread); #endif switch_mutex_lock(session_manager.mutex); session_manager.busy--; switch_mutex_unlock(session_manager.mutex); } else { switch_mutex_lock(session_manager.mutex); if (!switch_status_is_timeup(check_status) || session_manager.running > session_manager.busy) { if (!--session_manager.running) { switch_thread_cond_signal(session_manager.cond); } switch_mutex_unlock(session_manager.mutex); break; } switch_mutex_unlock(session_manager.mutex); } } #ifdef DEBUG_THREAD_POOL switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG10, "Worker Thread %ld Ended\n", (long)(intptr_t) thread); #endif switch_core_destroy_memory_pool(&pool); return NULL; }
-
switch_core_session.c#switch_core_session_thread()
essentially calls switch_core_state_machine.c#switch_core_session_run()
to hand the session to the FreeSWITCH core state-machine component, which drives the state transitions in a while loop; the state machine then triggers all the interactions of the session's lifecycle. The core state machine is somewhat involved and is not analyzed in this article; interested readers can look forward to a follow-up static void *SWITCH_THREAD_FUNC switch_core_session_thread(switch_thread_t *thread, void *obj) { switch_core_session_t *session = obj; switch_event_t *event; char *event_str = NULL; const char *val; session->thread = thread; session->thread_id = switch_thread_self(); switch_core_session_run(session); switch_core_media_bug_remove_all(session); if (session->soft_lock) { uint32_t loops = session->soft_lock * 10; switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG, "Session %" SWITCH_SIZE_T_FMT " (%s) Soft-Locked, " "Waiting %u for external entities\n", session->id, switch_channel_get_name(session->channel), session->soft_lock); while(--loops > 0) { if (!session->soft_lock) break; switch_yield(100000); } } switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG, "Session %" SWITCH_SIZE_T_FMT " (%s) Locked, Waiting on external entities\n", session->id, switch_channel_get_name(session->channel)); switch_core_session_write_lock(session); switch_set_flag(session, SSF_DESTROYED); if ((val = switch_channel_get_variable(session->channel, "memory_debug")) && switch_true(val)) { if (switch_event_create(&event, SWITCH_EVENT_GENERAL) == SWITCH_STATUS_SUCCESS) { switch_channel_event_set_data(session->channel, event); switch_event_serialize(event, &event_str, SWITCH_FALSE); switch_assert(event_str); switch_core_memory_pool_tag(switch_core_session_get_pool(session), switch_core_session_strdup(session, event_str)); free(event_str); switch_event_destroy(&event); } } switch_core_session_rwunlock(session); switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_NOTICE, "Session %" SWITCH_SIZE_T_FMT " (%s) Ended\n", session->id, switch_channel_get_name(session->channel)); switch_set_flag(session, SSF_DESTROYABLE); switch_core_session_destroy(&session); return NULL; }
-
With the steps above, an external INVITE request has produced a new session inside FreeSWITCH, and that session has been handed to the core state machine, so from this point the call's progress is driven by state changes. Returning now to sub-step 4 of step 1 in this section,
sofia.c#sofia_queue_message()
mainly does the following: - First it checks whether the mod_sofia_globals.msg_queue queue is over capacity; if so it calls
sofia.c#sofia_msg_thread_start()
to add consumer threads. That function was covered in step 5 of section 2.1 and is not repeated here - It then calls the underlying queue function to enqueue the event that Sofia-SIP delivered to the upper layer onto mod_sofia_globals.msg_queue
void sofia_queue_message(sofia_dispatch_event_t *de) { int launch = 0; if (mod_sofia_globals.running == 0 || !mod_sofia_globals.msg_queue) { /* Calling with SWITCH_TRUE as we are sure this is the stack's thread */ sofia_process_dispatch_event(&de); return; } if (de->profile && sofia_test_pflag(de->profile, PFLAG_THREAD_PER_REG) && de->data->e_event == nua_i_register && DE_THREAD_CNT < mod_sofia_globals.max_reg_threads) { sofia_process_dispatch_event_in_thread(&de); return; } if ((switch_queue_size(mod_sofia_globals.msg_queue) > (SOFIA_MSG_QUEUE_SIZE * (unsigned int)msg_queue_threads))) { launch++; } if (launch) { if (mod_sofia_globals.msg_queue_len < mod_sofia_globals.max_msg_queues) { sofia_msg_thread_start(mod_sofia_globals.msg_queue_len + 1); } } switch_queue_push(mod_sofia_globals.msg_queue, de); }
-
Combining this with step 5 of section 2.1, we know the messages on the mod_sofia_globals.msg_queue queue are consumed by
sofia.c#sofia_msg_thread_run()
. This thread task polls the queue in an infinite for(;;) loop and, once it obtains a message, calls sofia.c#sofia_process_dispatch_event()
to dispatch it void *SWITCH_THREAD_FUNC sofia_msg_thread_run(switch_thread_t *thread, void *obj) { void *pop; switch_queue_t *q = (switch_queue_t *) obj; int my_id; for (my_id = 0; my_id < mod_sofia_globals.msg_queue_len; my_id++) { if (mod_sofia_globals.msg_queue_thread[my_id] == thread) { break; } } switch_mutex_lock(mod_sofia_globals.mutex); msg_queue_threads++; switch_mutex_unlock(mod_sofia_globals.mutex); switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_WARNING, "MSG Thread %d Started\n", my_id); for(;;) { if (switch_queue_pop(q, &pop) != SWITCH_STATUS_SUCCESS) { switch_cond_next(); continue; } if (pop) { sofia_dispatch_event_t *de = (sofia_dispatch_event_t *) pop; sofia_process_dispatch_event(&de); } else { break; } } switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_WARNING, "MSG Thread Ended\n"); switch_mutex_lock(mod_sofia_globals.mutex); msg_queue_threads--; switch_mutex_unlock(mod_sofia_globals.mutex); return NULL; }
-
sofia.c#sofia_process_dispatch_event()
is plain at a glance: it simply calls sofia.c#our_sofia_event_callback()
to handle the event message void sofia_process_dispatch_event(sofia_dispatch_event_t **dep) { sofia_dispatch_event_t *de = *dep; nua_handle_t *nh = de->nh; nua_t *nua = de->nua; sofia_profile_t *profile = de->profile; sofia_private_t *sofia_private = nua_handle_magic(de->nh); *dep = NULL; our_sofia_event_callback(de->data->e_event, de->data->e_status, de->data->e_phrase, de->nua, de->profile, de->nh, sofia_private, de->sip, de, (tagi_t *) de->data->e_tags); nua_destroy_event(de->event); su_free(nua_handle_get_home(nh), de); switch_mutex_lock(profile->flag_mutex); profile->queued_events--; switch_mutex_unlock(profile->flag_mutex); /* This is not a stack thread, need to call via stack (_user) using events */ if (nh) nua_handle_unref_user(nh); nua_unref_user(nua); }
-
sofia.c#our_sofia_event_callback()
is the actual entry point that consumes events from the underlying Sofia-SIP stack. It mainly branches on the individual events: events prefixed with nua_i represent requests received by the stack, while events prefixed with nua_r represent responses it received. The nua_i_invite event normally ends up being handled by sofia.c#sofia_handle_sip_i_invite()
//sofia_dispatch_event_t *de static void our_sofia_event_callback(nua_event_t event, int status, char const *phrase, nua_t *nua, sofia_profile_t *profile, nua_handle_t *nh, sofia_private_t *sofia_private, sip_t const *sip, sofia_dispatch_event_t *de, tagi_t tags[]) { struct private_object *tech_pvt = NULL; auth_res_t auth_res = AUTH_FORBIDDEN; switch_core_session_t *session = NULL; switch_channel_t *channel = NULL; sofia_gateway_t *gateway = NULL; int locked = 0; int check_destroy = 1; profile->last_sip_event = switch_time_now(); /* sofia_private will be == &mod_sofia_globals.keep_private whenever a request is done with a new handle that has to be freed whenever the request is done */ ...... switch (event) { ...... case nua_i_invite: if (session && sofia_private) { if (sofia_private->is_call > 1) { sofia_handle_sip_i_reinvite(session, nua, profile, nh, sofia_private, sip, de, tags); } else { sofia_private->is_call++; sofia_handle_sip_i_invite(session, nua, profile, nh, sofia_private, sip, de, tags); } } break; } ...... }
-
sofia.c#sofia_handle_sip_i_invite()
is a very long function; its main job is to parse and validate the contents of the INVITE request, with the key points listed below. This concludes the analysis of FreeSWITCH's inbound SIP call handling for now; the subsequent flow in which FreeSWITCH places a call to the target UA of the external caller is left for a follow-up article - If the INVITE parameters fail validation, the library function
nua_respond()
is called directly to answer the external UA. If validation passes, the relevant attributes, including the SDP for media negotiation, are stored on the session - Depending on the profile configuration,
switch_core.c#switch_check_network_list_ip_port_token()
is called to run ACL checks on the INVITE; if a check fails, the external UA gets a 403 response right away - The macro
mod_sofia.h#sofia_reg_handle_register()
performs the registration/authentication check on the inbound external UA (the caller) - It calls
switch_caller.c#switch_caller_profile_new()
to create a switch_caller_profile_t
instance used to store the caller's matched dialplan and related information
void sofia_handle_sip_i_invite(switch_core_session_t *session, nua_t *nua, sofia_profile_t *profile, nua_handle_t *nh, sofia_private_t *sofia_private, sip_t const *sip, sofia_dispatch_event_t *de, tagi_t tags[]) { char key[128] = ""; sip_unknown_t *un; sip_remote_party_id_t *rpid = NULL; sip_p_asserted_identity_t *passerted = NULL; sip_p_preferred_identity_t *ppreferred = NULL; sip_privacy_t *privacy = NULL; sip_alert_info_t *alert_info = NULL; sip_call_info_t *call_info = NULL; private_object_t *tech_pvt = NULL; switch_channel_t *channel = NULL; //const char *channel_name = NULL; const char *displayname = NULL; const char *destination_number = NULL; const char *from_user = NULL, *from_host = NULL; const char *referred_by_user = NULL;//, *referred_by_host = NULL; const char *context = NULL; const char *dialplan = NULL; char network_ip[80] = ""; char proxied_client_ip[80]; switch_event_t *v_event = NULL; switch_xml_t x_user = NULL; uint32_t sess_count = switch_core_session_count(); uint32_t sess_max = switch_core_session_limit(0); int is_auth = 0, calling_myself = 0; int network_port = 0; char *is_nat = NULL; char *aniii = NULL; char acl_token[512] = ""; sofia_transport_t transport; const char *gw_name = NULL; const char *gw_param_name = NULL; char *call_info_str = NULL; nua_handle_t *bnh = NULL; char sip_acl_authed_by[512] = ""; char sip_acl_token[512] = ""; const char *dialog_from_user = "", *dialog_from_host = "", *to_user = "", *to_host = "", *contact_user = "", *contact_host = ""; const char *user_agent = "", *call_id = ""; url_t *from = NULL, *to = NULL, *contact = NULL; const char *to_tag = ""; const char *from_tag = ""; char *sql = NULL; char *acl_context = NULL; const char *r_sdp = NULL; int is_tcp = 0, is_tls = 0; const char *uparams = NULL; char *name_params = NULL; const char *req_uri = NULL; char *req_user = NULL; switch_time_t sip_invite_time; const char *session_id_header; sofia_glue_store_session_id(session, profile, sip, 0); ...... 
if (profile->acl_count) { uint32_t x = 0; int ok = 1; char *last_acl = NULL; const char *token = NULL; int acl_port = sofia_test_pflag(profile, PFLAG_USE_PORT_FOR_ACL_CHECK) ? network_port : 0; for (x = 0; x < profile->acl_count; x++) { last_acl = profile->acl[x]; switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "verifying acl \"%s\" for ip/port %s:%i.\n", switch_str_nil(last_acl), network_ip, acl_port); if ((ok = switch_check_network_list_ip_port_token(network_ip, acl_port, last_acl, &token))) { if (profile->acl_pass_context[x]) { acl_context = profile->acl_pass_context[x]; } if(!token && profile->acl_inbound_x_token_header) { const char * x_auth_token = sofia_glue_get_unknown_header(sip, profile->acl_inbound_x_token_header); if (!zstr(x_auth_token)) { token = x_auth_token; } } break; } if (profile->acl_fail_context[x]) { acl_context = profile->acl_fail_context[x]; } else { acl_context = NULL; } } if (ok) { if (token) { switch_set_string(acl_token, token); } if (sofia_test_pflag(profile, PFLAG_AUTH_CALLS)) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "IP %s Approved by acl \"%s[%s]\". 
Access Granted.\n", network_ip, switch_str_nil(last_acl), acl_token); switch_set_string(sip_acl_authed_by, last_acl); switch_set_string(sip_acl_token, acl_token); is_auth = 1; } } else { int network_ip_is_proxy = 0; const char* x_auth_ip = network_ip; /* Check if network_ip is a proxy allowed to send us calls */ if (profile->proxy_acl_count) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "%d acls to check for proxy\n", profile->proxy_acl_count); for (x = 0; x < profile->proxy_acl_count; x++) { last_acl = profile->proxy_acl[x]; switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "checking %s against acl %s\n", network_ip, last_acl); if (switch_check_network_list_ip_port_token(network_ip, network_port, last_acl, &token)) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_INFO, "%s is a proxy according to the %s acl\n", network_ip, last_acl); network_ip_is_proxy = 1; break; } } } /* * if network_ip is a proxy allowed to send calls, check for auth * ip header and see if it matches against the inbound acl */ if (network_ip_is_proxy) { const char * x_auth_port = sofia_glue_get_unknown_header(sip, "X-AUTH-PORT"); int x_auth_port_i = sofia_test_pflag(profile, PFLAG_USE_PORT_FOR_ACL_CHECK) ? zstr(x_auth_port) ? 
0 : atoi(x_auth_port) : 0; /* * if network_ip is a proxy allowed to send calls, * authorize call if proxy provided matched token header */ if (profile->acl_proxy_x_token_header) { const char * x_auth_token = sofia_glue_get_unknown_header(sip, profile->acl_proxy_x_token_header); if (!zstr(x_auth_token)) { token = x_auth_token; switch_copy_string(proxied_client_ip, x_auth_ip, sizeof(proxied_client_ip)); ok = 1; } } if (!ok && (x_auth_ip = sofia_glue_get_unknown_header(sip, "X-AUTH-IP")) && !zstr(x_auth_ip)) { for (x = 0; x < profile->acl_count; x++) { last_acl = profile->acl[x]; switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "verifying acl \"%s\" from proxy for ip/port %s:%i.\n", switch_str_nil(last_acl), x_auth_ip, x_auth_port_i); if ((ok = switch_check_network_list_ip_port_token(x_auth_ip, x_auth_port_i, last_acl, &token))) { switch_copy_string(proxied_client_ip, x_auth_ip, sizeof(proxied_client_ip)); if (profile->acl_pass_context[x]) { acl_context = profile->acl_pass_context[x]; } break; } if (profile->acl_fail_context[x]) { acl_context = profile->acl_fail_context[x]; } else { acl_context = NULL; } } } else { x_auth_ip = network_ip; } } if (ok) { if (token) { switch_set_string(acl_token, token); } if (sofia_test_pflag(profile, PFLAG_AUTH_CALLS)) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "IP %s Approved by acl \"%s[%s]\". Access Granted.\n", x_auth_ip, switch_str_nil(last_acl), acl_token); switch_set_string(sip_acl_authed_by, last_acl); switch_set_string(sip_acl_token, acl_token); is_auth = 1; } } else { if (!sofia_test_pflag(profile, PFLAG_AUTH_CALLS)) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_WARNING, "IP %s Rejected by acl \"%s\"\n", x_auth_ip, switch_str_nil(last_acl)); if (!acl_context) { nua_respond(nh, SIP_403_FORBIDDEN, TAG_IF(!zstr(session_id_header), SIPTAG_HEADER_STR(session_id_header)), TAG_END()); goto fail; } else { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "IP %s Rejected by acl \"%s\". 
Falling back to Digest auth.\n", x_auth_ip, switch_str_nil(last_acl)); } } } } } if (!is_auth && sofia_test_pflag(profile, PFLAG_AUTH_CALLS) && sofia_test_pflag(profile, PFLAG_AUTH_CALLS_ACL_ONLY)) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "IP/Port %s %i Rejected by acls and auth-calls-acl-only flag is set, rejecting call\n", network_ip, network_port); nua_respond(nh, SIP_403_FORBIDDEN, TAG_IF(!zstr(session_id_header), SIPTAG_HEADER_STR(session_id_header)), TAG_END()); goto fail; } if (!is_auth && sofia_test_pflag(profile, PFLAG_AUTH_CALLS) && sofia_test_pflag(profile, PFLAG_BLIND_AUTH)) { char *user = NULL; switch_status_t blind_result = SWITCH_STATUS_FALSE; if (!strcmp(network_ip, profile->sipip) && network_port == profile->sip_port) { calling_myself++; } if (sip->sip_from) { user = switch_core_session_sprintf(session, "%s@%s", sip->sip_from->a_url->url_user, sip->sip_from->a_url->url_host); blind_result = sofia_locate_user(user, session, sip, &x_user); } if (!sofia_test_pflag(profile, PFLAG_BLIND_AUTH_ENFORCE_RESULT) || blind_result == SWITCH_STATUS_SUCCESS) { is_auth++; } else if (sofia_test_pflag(profile, PFLAG_BLIND_AUTH_REPLY_403)) { switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_DEBUG, "blind auth enforce 403 enabled and couldn't find user %s, rejecting call\n", user); nua_respond(nh, SIP_403_FORBIDDEN, TAG_END()); goto fail; } } if (sip->sip_from) { tech_pvt->from_user = switch_core_session_strdup(session, sip->sip_from->a_url->url_user); } tech_pvt->mparams.remote_ip = switch_core_session_strdup(session, network_ip); tech_pvt->mparams.remote_port = network_port; if (!is_auth && (sofia_test_pflag(profile, PFLAG_AUTH_CALLS) || (!sofia_test_pflag(profile, PFLAG_BLIND_AUTH) && (sip->sip_proxy_authorization || sip->sip_authorization)))) { if (!strcmp(network_ip, profile->sipip) && network_port == profile->sip_port) { calling_myself++; } else { switch_event_create(&v_event, SWITCH_EVENT_REQUEST_PARAMS); if (sofia_reg_handle_register(nua, 
profile, nh, sip, de, REG_INVITE, key, sizeof(key), &v_event, NULL, NULL, &x_user)) { if (v_event) { switch_event_destroy(&v_event); } if (x_user) { switch_xml_free(x_user); } if (sip->sip_authorization || sip->sip_proxy_authorization) { goto fail; } return; } } is_auth++; } channel = tech_pvt->channel = switch_core_session_get_channel(session); switch_channel_set_variable_printf(channel, "sip_local_network_addr", "%s", profile->extsipip ? profile->extsipip : profile->sipip); switch_channel_set_variable_printf(channel, "sip_network_ip", "%s", network_ip); switch_channel_set_variable_printf(channel, "sip_network_port", "%d", network_port); switch_channel_set_variable_printf(channel, "sip_invite_stamp", "%" SWITCH_TIME_T_FMT, sip_invite_time); if (*acl_token) { if (x_user) { switch_xml_free(x_user); x_user = NULL; } switch_channel_set_variable(channel, "acl_token", acl_token); if (sofia_locate_user(acl_token, session, sip, &x_user) == SWITCH_STATUS_SUCCESS) { switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_DEBUG, "Authenticating user %s\n", acl_token); } else { switch_log_printf(SWITCH_CHANNEL_SESSION_LOG(session), SWITCH_LOG_WARNING, "Error Authenticating user %s\n", acl_token); if (sofia_test_pflag(profile, PFLAG_AUTH_REQUIRE_USER)) { nua_respond(nh, SIP_480_TEMPORARILY_UNAVAILABLE, TAG_END()); if (v_event) { switch_event_destroy(&v_event); } goto fail; } } } if (sip->sip_via) { char tmp[35] = ""; const char *ipv6 = strchr(tech_pvt->mparams.remote_ip, ':'); transport = sofia_glue_via2transport(sip->sip_via); tech_pvt->record_route = switch_core_session_sprintf(session, "sip:%s%s%s:%d;transport=%s", ipv6 ? "[" : "", tech_pvt->mparams.remote_ip, ipv6 ? 
"]" : "", tech_pvt->mparams.remote_port, sofia_glue_transport2str(transport)); switch_channel_set_variable(channel, "sip_received_ip", tech_pvt->mparams.remote_ip); snprintf(tmp, sizeof(tmp), "%d", tech_pvt->mparams.remote_port); switch_channel_set_variable(channel, "sip_received_port", tmp); switch_channel_set_variable(channel, "sip_via_protocol", sofia_glue_transport2str(sofia_glue_via2transport(sip->sip_via))); } if (*key != '\0') { tech_pvt->key = switch_core_session_strdup(session, key); } if (is_auth) { switch_channel_set_variable(channel, "sip_authorized", "true"); if (!zstr(sip_acl_authed_by)) { switch_channel_set_variable(channel, "sip_acl_authed_by", sip_acl_authed_by); } if (!zstr(sip_acl_token)) { switch_channel_set_variable(channel, "sip_acl_token", sip_acl_token); } } if (calling_myself) { switch_channel_set_variable(channel, "sip_looped_call", "true"); } tech_pvt->caller_profile = switch_caller_profile_new(switch_core_session_get_pool(session), NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, MODNAME, NULL, NULL); switch_channel_set_caller_profile(channel, tech_pvt->caller_profile); if (x_user) { const char *ruser = NULL, *rdomain = NULL, *user = switch_xml_attr(x_user, "id"), *domain = switch_xml_attr(x_user, "domain-name"); if (v_event) { switch_event_header_t *hp; for (hp = v_event->headers; hp; hp = hp->next) { switch_channel_set_variable(channel, hp->name, hp->value); } ruser = switch_event_get_header(v_event, "user_name"); rdomain = switch_event_get_header(v_event, "domain_name"); switch_channel_set_variable(channel, "requested_user_name", ruser); switch_channel_set_variable(channel, "requested_domain_name", rdomain); } if (!user) user = ruser; if (!domain) domain = rdomain; switch_ivr_set_user_xml(session, NULL, user, domain, x_user); switch_xml_free(x_user); x_user = NULL; } ...... }