Commit f882cc6d authored by 翟艳秋(20软)

chore: updated some docstrings and README.md

parent a62be560
### File structure
1. GUI file: try_with_gui.py
1. GUI file: main_gui.py
2. Function modules:
......@@ -11,8 +11,17 @@
- speech_synthesis.py: generates the narration audio
- split_wav.py: extracts and splits the audio in a video
- PaddlePaddle_DeepSpeech2 folder: uses PaddleSpeech to get the text corresponding to an audio clip
- infer_path.py: ASR-based recognition of long audio
- ./utils/audio_vad.py: uses VAD to split long audio into n shorter segments
3. Documentation
- Stored in the docs folder.
- **Contents**: mainly describes the functions in the GUI file and function modules above and how to use them.
4. Configuration files
* speakers.json: configuration for each speaker
* ./speaker_audio: sample audio for each speaker
* main_gui.spec: configuration file for packaging with pyinstaller; see "打包方法.md" for the packaging steps.
* eagle_2.ico: application icon
......@@ -74,7 +74,7 @@ def detect_with_asr(video_path: str, book_path: str, start_time=0, end_time=-1,
book_path (str): output path for the narration table
start_time (int, optional): actual start time of the video. Defaults to 0.
end_time (int, optional): actual end time of the video. Defaults to -1.
state (_type_, optional): state keyword used for communication. Defaults to None.
state (optional): state keyword used for communication. Defaults to None.
"""
if state is None:
state = [None]
......
......@@ -147,7 +147,7 @@
<span class="sd"> book_path (str): output path for the narration table</span>
<span class="sd"> start_time (int, optional): actual start time of the video. Defaults to 0.</span>
<span class="sd"> end_time (int, optional): actual end time of the video. Defaults to -1.</span>
<span class="sd"> state (_type_, optional): state keyword used for communication. Defaults to None.</span>
<span class="sd"> state (optional): state keyword used for communication. Defaults to None.</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="k">if</span> <span class="n">state</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">state</span> <span class="o">=</span> <span class="p">[</span><span class="kc">None</span><span class="p">]</span>
......
......@@ -184,7 +184,8 @@
<div class="viewcode-block" id="create_detail_day"><a class="viewcode-back" href="../main_gui.html#main_gui.create_detail_day">[文档]</a><span class="k">def</span> <span class="nf">create_detail_day</span><span class="p">()</span> <span class="o">-&gt;</span> <span class="nb">str</span><span class="p">:</span>
<span class="sd">&quot;&quot;&quot;生成当天日期</span>
<span class="sd"> :return: [str], 当天日期</span>
<span class="sd"> Returns:</span>
<span class="sd"> str: 当天日期</span>
<span class="sd"> &quot;&quot;&quot;</span>
<span class="n">daytime</span> <span class="o">=</span> <span class="n">datetime</span><span class="o">.</span><span class="n">datetime</span><span class="o">.</span><span class="n">now</span><span class="p">()</span><span class="o">.</span><span class="n">strftime</span><span class="p">(</span><span class="s1">&#39;day&#39;</span> <span class="o">+</span> <span class="s1">&#39;%Y_%m_</span><span class="si">%d</span><span class="s1">&#39;</span><span class="p">)</span>
<span class="k">return</span> <span class="n">daytime</span></div>
......
......@@ -76,6 +76,11 @@
<span class="sd">Chooses between the OCR-based and the ASR-based method according to whether the video has subtitles.</span>
<span class="sd">(Whether the video has subtitles can be passed in from the GUI, or detected by calling the detect_movie function.)</span>
<span class="sd">.. code-block:: python</span>
<span class="sd"> from narratage_detection import detect</span>
<span class="sd"> detect(video_path, start_time, end_time, book_path, state, subtitle)</span>
<span class="sd">&quot;&quot;&quot;</span>
<span class="kn">import</span> <span class="nn">os.path</span>
<span class="kn">import</span> <span class="nn">argparse</span>
......
......@@ -79,7 +79,7 @@
<span class="sd"> - Export the subtitles as a subtitle file;</span>
<span class="sd"> - Call the TTS API provided by azure (speaker chosen by the user) to synthesize the narration audio and save it to the user-selected folder;</span>
<span class="sd"> - Adjust the volume of the original video's audio according to the narration's timestamps, text, speaking rate, etc.</span>
<span class="sd"> - Merge the synthesized narration into the original video's audio and output it to the user-selected folder.</span>
<span class="sd"> - Merge the synthesized narration into the original video's audio and output it to the user-selected folder. (temporarily abandoned)</span>
<span class="sd">External invocation:</span>
......
......@@ -71,8 +71,15 @@
<div itemprop="articleBody">
<h1>split_wav 源代码</h1><div class="highlight"><pre>
<span></span><span class="c1"># The simplest approach is track separation: strip off the background-music track, and once only the vocal track remains, cut at the silent segments</span>
<span></span><span class="sd">&quot;&quot;&quot;Some audio preprocessing (currently only audio extraction is provided)</span>
<span class="sd">Mainly includes:</span>
<span class="sd"> - Read the audio from the video</span>
<span class="sd"> - Split the audio into n files and save them locally</span>
<span class="sd"> - Extract the vocals from the audio</span>
<span class="sd"> - Extract the background music from the audio</span>
<span class="sd">&quot;&quot;&quot;</span>
<span class="c1"># The simplest approach is track separation: strip off the background-music track, and once only the vocal track remains, cut at the silent segments</span>
<span class="c1"># When there is only one audio track, use a music-retrieval system to separate the vocals from the background music</span>
<span class="kn">import</span> <span class="nn">os</span>
......
......@@ -26,3 +26,9 @@ Welcome to accessibility movie's documentation!
split_wav
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
\ No newline at end of file
......@@ -116,7 +116,7 @@
<li><p><strong>book_path</strong> (<em>str</em>) – output path for the narration table</p></li>
<li><p><strong>start_time</strong> (<em>int</em><em>, </em><em>optional</em>) – actual start time of the video. Defaults to 0.</p></li>
<li><p><strong>end_time</strong> (<em>int</em><em>, </em><em>optional</em>) – actual end time of the video. Defaults to -1.</p></li>
<li><p><strong>state</strong> (<em>_type_</em><em>, </em><em>optional</em>) – state keyword used for communication. Defaults to None.</p></li>
<li><p><strong>state</strong> (<em>optional</em>) – state keyword used for communication. Defaults to None.</p></li>
</ul>
</dd>
</dl>
......
......@@ -88,6 +88,14 @@
<li class="toctree-l1"><a class="reference internal" href="split_wav.html">split_wav</a></li>
</ul>
</div>
</section>
<section id="indices-and-tables">
<h1>Indices and tables<a class="headerlink" href="#indices-and-tables" title="Permalink to this headline"></a></h1>
<ul class="simple">
<li><p><a class="reference internal" href="genindex.html"><span class="std std-ref">Index</span></a></p></li>
<li><p><a class="reference internal" href="py-modindex.html"><span class="std std-ref">Module Index</span></a></p></li>
<li><p><a class="reference internal" href="search.html"><span class="std std-ref">Search Page</span></a></p></li>
</ul>
</section>
......
......@@ -196,7 +196,10 @@
<dd><p>Generate today's date</p>
<dl class="field-list simple">
<dt class="field-odd">Returns</dt>
<dd class="field-odd"><p>[str], today's date</p>
<dd class="field-odd"><p>today's date</p>
</dd>
<dt class="field-even">Return type</dt>
<dd class="field-even"><p>str</p>
</dd>
</dl>
</dd></dl>
......
......@@ -78,6 +78,10 @@
<p>Detect the subtitle and narration intervals in the video</p>
<p>Chooses between the OCR-based and the ASR-based method according to whether the video has subtitles.
(Whether the video has subtitles can be passed in from the GUI, or detected by calling the detect_movie function.)</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">narratage_detection</span> <span class="kn">import</span> <span class="n">detect</span>
<span class="n">detect</span><span class="p">(</span><span class="n">video_path</span><span class="p">,</span> <span class="n">start_time</span><span class="p">,</span> <span class="n">end_time</span><span class="p">,</span> <span class="n">book_path</span><span class="p">,</span> <span class="n">state</span><span class="p">,</span> <span class="n">subtitle</span><span class="p">)</span>
</pre></div>
</div>
<dl class="py function">
<dt class="sig sig-object py" id="narratage_detection.detect">
<span class="sig-prename descclassname"><span class="pre">narratage_detection.</span></span><span class="sig-name descname"><span class="pre">detect</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">video_path</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">str</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">start_time</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">end_time</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">book_path</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">str</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">state</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">subtitle</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">int</span></span><span class="w"> </span><span class="o"><span class="pre">=</span></span><span class="w"> </span><span class="default_value"><span class="pre">0</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/narratage_detection.html#detect"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#narratage_detection.detect" title="Permalink to this definition"></a></dt>
......
......@@ -82,7 +82,7 @@
<li><p>Export the subtitles as a subtitle file;</p></li>
<li><p>Call the TTS API provided by azure (speaker chosen by the user) to synthesize the narration audio and save it to the user-selected folder;</p></li>
<li><p>Adjust the volume of the original video's audio according to the narration's timestamps, text, speaking rate, etc.</p></li>
<li><p>Merge the synthesized narration into the original video's audio and output it to the user-selected folder.</p></li>
<li><p>Merge the synthesized narration into the original video's audio and output it to the user-selected folder. (temporarily abandoned)</p></li>
</ul>
</dd>
</dl>
......
......@@ -74,6 +74,16 @@
<section id="module-split_wav">
<span id="split-wav"></span><h1>split_wav<a class="headerlink" href="#module-split_wav" title="Permalink to this headline"></a></h1>
<p>Some audio preprocessing (currently only audio extraction is provided)</p>
<dl class="simple">
<dt>Mainly includes:</dt><dd><ul class="simple">
<li><p>Read the audio from the video</p></li>
<li><p>Split the audio into n files and save them locally</p></li>
<li><p>Extract the vocals from the audio</p></li>
<li><p>Extract the background music from the audio</p></li>
</ul>
</dd>
</dl>
<dl class="py function">
<dt class="sig sig-object py" id="split_wav.extract_audio">
<span class="sig-prename descclassname"><span class="pre">split_wav.</span></span><span class="sig-name descname"><span class="pre">extract_audio</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">video_path</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">str</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">root</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">str</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">start_time</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">end_time</span></span><span class="p"><span class="pre">:</span></span><span class="w"> </span><span class="n"><span class="pre">float</span></span></em><span class="sig-paren">)</span> <span class="sig-return"><span class="sig-return-icon">&#x2192;</span> <span class="sig-return-typehint"><span class="pre">str</span></span></span><a class="reference internal" href="_modules/split_wav.html#extract_audio"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#split_wav.extract_audio" title="Permalink to this definition"></a></dt>
......
......@@ -12,14 +12,6 @@ set BUILDDIR=build
if "%1" == "" goto help
if "%1" == "pdf" (
%SPHINXBUILD% -b pdf %ALLSPHINXOPTS% %BUILDDIR%/pdf
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pdf files are in %BUILDDIR%/pdf.
goto end
)
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
......
......@@ -111,7 +111,8 @@ def play_audio():
def create_detail_day() -> str:
"""Generate today's date
:return: [str], today's date
Returns:
str: today's date
"""
daytime = datetime.datetime.now().strftime('day' + '%Y_%m_%d')
return daytime
......
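For concreteness, the `'day' + '%Y_%m_%d'` format string used by create_detail_day produces tags of the shape shown below (illustrated here with a fixed date instead of `datetime.datetime.now()`, so the result is reproducible):

```python
import datetime

# Same format string as create_detail_day, applied to a fixed date
# so the shape of the returned tag is visible:
stamp = datetime.datetime(2022, 5, 1).strftime('day' + '%Y_%m_%d')
print(stamp)  # day2022_05_01
```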
......@@ -3,6 +3,11 @@
Chooses between the OCR-based and the ASR-based method according to whether the video has subtitles.
(Whether the video has subtitles can be passed in from the GUI, or detected by calling the detect_movie function.)
.. code-block:: python
from narratage_detection import detect
detect(video_path, start_time, end_time, book_path, state, subtitle)
"""
import os.path
import argparse
......
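The selection logic the docstring describes can be sketched as a small dispatcher. This is a hedged illustration only, not the module's actual code: the `ocr_backend`/`asr_backend` parameters are hypothetical stand-ins for the real OCR- and ASR-based detection functions.

```python
def detect_sketch(video_path, start_time, end_time, book_path,
                  state=None, subtitle=0, ocr_backend=None, asr_backend=None):
    """Pick the OCR-based path when the video has subtitles, else the ASR path.

    ocr_backend/asr_backend are hypothetical callables standing in for the
    module's real detection functions.
    """
    if state is None:
        state = [None]  # same default as detect_with_asr in this commit
    backend = ocr_backend if subtitle else asr_backend
    return backend(video_path, book_path, start_time, end_time, state)
```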
"""Some audio preprocessing (currently only audio extraction is provided)
Mainly includes:
- Read the audio from the video
- Split the audio into n files and save them locally
- Extract the vocals from the audio
- Extract the background music from the audio
"""
# The simplest approach is track separation: strip off the background-music track, and once only the vocal track remains, cut at the silent segments
# When there is only one audio track, use a music-retrieval system to separate the vocals from the background music
import os
......
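The "split the audio into n files" step from the split_wav docstring could be sketched with the standard-library `wave` module as below. This is a rough, stdlib-only sketch; the function name and chunking scheme are illustrative and not split_wav.py's real API.

```python
import wave

def split_into_n(src: str, n: int, prefix: str) -> list:
    """Split a WAV file into n roughly equal chunks: prefix_0.wav ... prefix_{n-1}.wav."""
    paths = []
    with wave.open(src, "rb") as w:
        params = w.getparams()
        total = w.getnframes()
        per_chunk = total // n
        for i in range(n):
            # The last chunk absorbs any remainder frames.
            count = per_chunk if i < n - 1 else total - per_chunk * (n - 1)
            frames = w.readframes(count)
            path = "%s_%d.wav" % (prefix, i)
            with wave.open(path, "wb") as out:
                out.setparams(params)  # nframes is patched on close
                out.writeframes(frames)
            paths.append(path)
    return paths
```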