邵子睿(21软) / pinyin2hanzi · Commits

Commit cf45cfe1, authored Dec 06, 2021 by szr712

支持多卡训练 (Support multi-GPU training)

Parent: a3eec639

Showing 3 changed files, with 3 additions and 2 deletions:
- Batch.py (+1 / -1)
- Process.py (+1 / -0)
- log.txt (+1 / -1)
Batch.py (view file @ cf45cfe1)

@@ -79,7 +79,7 @@ class MyIterator(data.Iterator):
                 if r < p and char in self.yunmus:
                     new_ex.src[i] = char[:-1] + "0"
             self.dataset.examples.append(new_ex)
-        print("data len:{}".format(len(self.dataset.examples)))
+        # print("data len:{}".format(len(self.dataset.examples)))
         # print("src:{}\ntrg:{}".format(type(ex.src),type(ex.trg)))
         if self.sort:
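The context around the silenced print is a data-augmentation step: with probability p, a pinyin final (yunmu) has its tone digit replaced by "0" via `char[:-1] + "0"`. A minimal standalone sketch of that step, assuming tones are encoded as a trailing digit and using an illustrative stand-in for the `yunmus` set from Batch.py:

```python
import random

# Hypothetical stand-in for the yunmu vocabulary used in Batch.py:
# pinyin finals whose last character is a tone digit.
YUNMUS = {"ang1", "ang2", "eng3", "ong4"}

def drop_tones(src, p, yunmus=YUNMUS, rng=random):
    """With probability p, replace a final's tone digit with '0',
    mirroring new_ex.src[i] = char[:-1] + "0" in the diff above."""
    out = list(src)
    for i, char in enumerate(out):
        r = rng.random()
        if r < p and char in yunmus:
            out[i] = char[:-1] + "0"
    return out

# p=1.0 makes the augmentation deterministic: every known final loses its tone.
print(drop_tones(["ang1", "x", "ong4"], p=1.0))  # ['ang0', 'x', 'ong0']
```

With p between 0 and 1, the iterator would mix toned and toneless examples, which matches the `wo_tones` training data referenced in log.txt.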
Process.py (view file @ cf45cfe1)

@@ -164,6 +164,7 @@ def create_dataset(opt, SRC, TRG):
     opt.trg_pad = TRG.vocab.stoi['<pad>']
     opt.train_len = get_len(train_iter)
+    print("train len:{}".format(opt.train_len))
     return train_iter
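The added print reports `opt.train_len`, computed by `get_len(train_iter)`. That helper is not shown in this diff; a plausible sketch, assuming it simply counts the batches the iterator yields in one pass:

```python
def get_len(train_iter):
    """Hypothetical reconstruction of get_len (not part of this diff):
    count how many batches the training iterator yields."""
    return sum(1 for _ in train_iter)

# Any iterable of batches works; here, five dummy batches.
batches = iter([object() for _ in range(5)])
print("train len:{}".format(get_len(batches)))  # train len:5
```

Note that a counting pass like this exhausts a one-shot iterator, so the real iterator would need to be re-iterable for training to proceed afterwards.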
log.txt (view file @ cf45cfe1)

@@ -42,4 +42,4 @@ CUDA_VISIBLE_DEVICES=2 nohup python train_token_classification.py -src_data data
 CUDA_VISIBLE_DEVICES=1 python train_token_classification.py -src_data data/train_file/pinyin_split_random_wo_tones -trg_data data/train_file/hanzi_split_random_wo_tones -epochs 100 -model_name token_classification_split_new -src_voc ./data/voc/pinyin.txt -trg_voc ./data/voc/hanzi.txt
-CUDA_VISIBLE_DEVICES=1 python train_token_classification.py -src_data data/train_file/pinyin_split_random_wo_tones -trg_data data/train_file/hanzi_split_random_wo_tones -epochs 100 -model_name token_classification_split_new -src_voc ./data/voc/pinyin.txt -trg_voc ./data/voc/hanzi.txt -gpus 4,5,6,7
+CUDA_VISIBLE_DEVICES=5,6,7,8 python train_token_classification.py -src_data data/train_file/pinyin_split_random_wo_tones -trg_data data/train_file/hanzi_split_random_wo_tones -epochs 100 -model_name token_classification_split_new -src_voc ./data/voc/pinyin.txt -trg_voc ./data/voc/hanzi.txt -batchsize 128 -master_batch_size 32
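The new command pairs `-batchsize 128` with `-master_batch_size 32`. In common multi-GPU setups this caps how much of the batch lands on the first (master) GPU, which also holds optimizer state and gathered outputs, and splits the remainder across the other devices. The splitting code itself is not part of this diff; a sketch of how such flags are typically turned into per-GPU chunk sizes (names are illustrative):

```python
def chunk_sizes(batch_size, master_batch_size, num_gpus):
    """Give the master GPU `master_batch_size` samples and split the
    remainder as evenly as possible across the other GPUs,
    front-loading any leftover samples."""
    rest = batch_size - master_batch_size
    workers = num_gpus - 1
    sizes = [master_batch_size]
    for i in range(workers):
        size = rest // workers
        if i < rest % workers:  # hand out the remainder one sample at a time
            size += 1
        sizes.append(size)
    return sizes

# -batchsize 128 -master_batch_size 32 on the 4 visible GPUs (5,6,7,8):
print(chunk_sizes(128, 32, 4))  # [32, 32, 32, 32]
```

Here the split happens to be even; with `-master_batch_size 16` the master would get 16 samples and each worker roughly 37, keeping the master's memory headroom.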