Feb 13 04:11:17.550935 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 13 04:11:17.550948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 04:11:17.550955 kernel: BIOS-provided physical RAM map: Feb 13 04:11:17.550959 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Feb 13 04:11:17.550962 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Feb 13 04:11:17.550966 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Feb 13 04:11:17.550971 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Feb 13 04:11:17.550975 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Feb 13 04:11:17.550978 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000825ddfff] usable Feb 13 04:11:17.550982 kernel: BIOS-e820: [mem 0x00000000825de000-0x00000000825defff] ACPI NVS Feb 13 04:11:17.550987 kernel: BIOS-e820: [mem 0x00000000825df000-0x00000000825dffff] reserved Feb 13 04:11:17.550991 kernel: BIOS-e820: [mem 0x00000000825e0000-0x000000008afccfff] usable Feb 13 04:11:17.550994 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Feb 13 04:11:17.550998 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Feb 13 04:11:17.551003 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Feb 13 04:11:17.551008 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Feb 13 04:11:17.551012 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Feb 13 04:11:17.551017 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Feb 13 04:11:17.551021 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 04:11:17.551025 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Feb 13 04:11:17.551029 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Feb 13 04:11:17.551033 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 13 04:11:17.551037 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Feb 13 04:11:17.551041 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Feb 13 04:11:17.551045 kernel: NX (Execute Disable) protection: active Feb 13 04:11:17.551050 kernel: SMBIOS 3.2.1 present. 
Feb 13 04:11:17.551055 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Feb 13 04:11:17.551059 kernel: tsc: Detected 3400.000 MHz processor Feb 13 04:11:17.551063 kernel: tsc: Detected 3399.906 MHz TSC Feb 13 04:11:17.551068 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 04:11:17.551072 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 04:11:17.551077 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Feb 13 04:11:17.551081 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 04:11:17.551085 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Feb 13 04:11:17.551090 kernel: Using GB pages for direct mapping Feb 13 04:11:17.551094 kernel: ACPI: Early table checksum verification disabled Feb 13 04:11:17.551099 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Feb 13 04:11:17.551104 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 13 04:11:17.551108 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Feb 13 04:11:17.551113 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 13 04:11:17.551119 kernel: ACPI: FACS 0x000000008C66CF80 000040 Feb 13 04:11:17.551123 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Feb 13 04:11:17.551129 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Feb 13 04:11:17.551134 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 13 04:11:17.551138 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 13 04:11:17.551143 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Feb 13 04:11:17.551147 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 13 04:11:17.551152 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 13 04:11:17.551157 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 13 04:11:17.551161 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 04:11:17.551167 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 13 04:11:17.551172 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 13 04:11:17.551176 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 04:11:17.551181 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 04:11:17.551186 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 13 04:11:17.551190 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 13 04:11:17.551195 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 04:11:17.551199 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 13 04:11:17.551205 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 13 04:11:17.551210 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Feb 13 04:11:17.551214 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 13 04:11:17.551219 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 13 
04:11:17.551223 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 13 04:11:17.551228 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Feb 13 04:11:17.551233 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 13 04:11:17.551238 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 13 04:11:17.551242 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 13 04:11:17.551248 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Feb 13 04:11:17.551252 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 13 04:11:17.551261 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Feb 13 04:11:17.551266 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Feb 13 04:11:17.551270 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Feb 13 04:11:17.551275 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Feb 13 04:11:17.551279 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Feb 13 04:11:17.551284 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Feb 13 04:11:17.551290 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Feb 13 04:11:17.551294 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Feb 13 04:11:17.551299 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Feb 13 04:11:17.551304 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Feb 13 04:11:17.551309 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Feb 13 04:11:17.551313 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Feb 13 04:11:17.551318 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Feb 13 04:11:17.551322 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Feb 13 04:11:17.551327 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Feb 13 04:11:17.551333 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Feb 13 04:11:17.551337 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Feb 13 04:11:17.551342 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Feb 13 04:11:17.551346 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Feb 13 04:11:17.551351 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Feb 13 04:11:17.551356 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Feb 13 04:11:17.551360 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Feb 13 04:11:17.551365 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Feb 13 04:11:17.551369 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Feb 13 04:11:17.551375 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Feb 13 04:11:17.551380 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Feb 13 04:11:17.551384 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Feb 13 04:11:17.551389 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Feb 13 04:11:17.551394 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Feb 13 
04:11:17.551398 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Feb 13 04:11:17.551403 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Feb 13 04:11:17.551407 kernel: No NUMA configuration found Feb 13 04:11:17.551412 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Feb 13 04:11:17.551417 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Feb 13 04:11:17.551422 kernel: Zone ranges: Feb 13 04:11:17.551427 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 04:11:17.551432 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 04:11:17.551436 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Feb 13 04:11:17.551441 kernel: Movable zone start for each node Feb 13 04:11:17.551445 kernel: Early memory node ranges Feb 13 04:11:17.551450 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 13 04:11:17.551454 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 13 04:11:17.551459 kernel: node 0: [mem 0x0000000040400000-0x00000000825ddfff] Feb 13 04:11:17.551465 kernel: node 0: [mem 0x00000000825e0000-0x000000008afccfff] Feb 13 04:11:17.551469 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Feb 13 04:11:17.551474 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Feb 13 04:11:17.551478 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Feb 13 04:11:17.551483 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Feb 13 04:11:17.551488 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 04:11:17.551496 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 13 04:11:17.551502 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 13 04:11:17.551507 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 13 04:11:17.551512 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Feb 13 04:11:17.551517 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Feb 13 04:11:17.551523 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Feb 13 04:11:17.551528 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Feb 13 04:11:17.551533 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 13 04:11:17.551538 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 04:11:17.551543 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 04:11:17.551548 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 04:11:17.551553 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 04:11:17.551558 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 13 04:11:17.551563 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 04:11:17.551568 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 04:11:17.551573 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 04:11:17.551578 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 04:11:17.551583 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 04:11:17.551588 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 04:11:17.551593 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 04:11:17.551599 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 04:11:17.551604 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 04:11:17.551608 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 04:11:17.551613 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x10] high edge lint[0x1]) Feb 13 04:11:17.551618 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 13 04:11:17.551623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 04:11:17.551628 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 04:11:17.551633 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 04:11:17.551638 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 04:11:17.551644 kernel: TSC deadline timer available Feb 13 04:11:17.551649 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 13 04:11:17.551654 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Feb 13 04:11:17.551659 kernel: Booting paravirtualized kernel on bare hardware Feb 13 04:11:17.551664 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 04:11:17.551669 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Feb 13 04:11:17.551674 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Feb 13 04:11:17.551679 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 13 04:11:17.551684 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 04:11:17.551690 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Feb 13 04:11:17.551695 kernel: Policy zone: Normal Feb 13 04:11:17.551701 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 04:11:17.551706 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 04:11:17.551711 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 13 04:11:17.551716 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 13 04:11:17.551721 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 04:11:17.551727 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved) Feb 13 04:11:17.551732 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 04:11:17.551737 kernel: ftrace: allocating 34475 entries in 135 pages Feb 13 04:11:17.551742 kernel: ftrace: allocated 135 pages with 4 groups Feb 13 04:11:17.551747 kernel: rcu: Hierarchical RCU implementation. Feb 13 04:11:17.551752 kernel: rcu: RCU event tracing is enabled. Feb 13 04:11:17.551757 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 04:11:17.551762 kernel: Rude variant of Tasks RCU enabled. Feb 13 04:11:17.551767 kernel: Tracing variant of Tasks RCU enabled. Feb 13 04:11:17.551773 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 04:11:17.551778 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 04:11:17.551783 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 13 04:11:17.551788 kernel: random: crng init done Feb 13 04:11:17.551793 kernel: Console: colour dummy device 80x25 Feb 13 04:11:17.551798 kernel: printk: console [tty0] enabled Feb 13 04:11:17.551803 kernel: printk: console [ttyS1] enabled Feb 13 04:11:17.551808 kernel: ACPI: Core revision 20210730 Feb 13 04:11:17.551813 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Feb 13 04:11:17.551818 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 04:11:17.551824 kernel: DMAR: Host address width 39 Feb 13 04:11:17.551829 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 13 04:11:17.551834 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 13 04:11:17.551839 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Feb 13 04:11:17.551844 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Feb 13 04:11:17.551849 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 13 04:11:17.551854 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 13 04:11:17.551859 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 13 04:11:17.551864 kernel: x2apic enabled Feb 13 04:11:17.551870 kernel: Switched APIC routing to cluster x2apic. Feb 13 04:11:17.551875 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 13 04:11:17.551880 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 13 04:11:17.551885 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 13 04:11:17.551890 kernel: process: using mwait in idle threads Feb 13 04:11:17.551895 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 04:11:17.551900 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 04:11:17.551905 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 04:11:17.551910 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 13 04:11:17.551916 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 13 04:11:17.551921 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 04:11:17.551926 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 04:11:17.551931 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 04:11:17.551936 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 04:11:17.551941 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 13 04:11:17.551946 kernel: TAA: Mitigation: TSX disabled Feb 13 04:11:17.551950 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 13 04:11:17.551956 kernel: SRBDS: Mitigation: Microcode Feb 13 04:11:17.551961 kernel: GDS: Vulnerable: No microcode Feb 13 04:11:17.551966 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 04:11:17.551972 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 04:11:17.551976 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 04:11:17.551981 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 04:11:17.551986 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 04:11:17.551991 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 04:11:17.551996 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 04:11:17.552001 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 04:11:17.552006 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 13 04:11:17.552011 kernel: Freeing SMP alternatives memory: 32K Feb 13 04:11:17.552016 kernel: pid_max: default: 32768 minimum: 301 Feb 13 04:11:17.552021 kernel: LSM: Security Framework initializing Feb 13 04:11:17.552026 kernel: SELinux: Initializing. Feb 13 04:11:17.552031 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 04:11:17.552036 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 04:11:17.552041 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 13 04:11:17.552046 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 04:11:17.552051 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 13 04:11:17.552056 kernel: ... version: 4 Feb 13 04:11:17.552061 kernel: ... bit width: 48 Feb 13 04:11:17.552066 kernel: ... generic registers: 4 Feb 13 04:11:17.552071 kernel: ... value mask: 0000ffffffffffff Feb 13 04:11:17.552077 kernel: ... max period: 00007fffffffffff Feb 13 04:11:17.552082 kernel: ... fixed-purpose events: 3 Feb 13 04:11:17.552087 kernel: ... event mask: 000000070000000f Feb 13 04:11:17.552092 kernel: signal: max sigframe size: 2032 Feb 13 04:11:17.552097 kernel: rcu: Hierarchical SRCU implementation. Feb 13 04:11:17.552102 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 13 04:11:17.552107 kernel: smp: Bringing up secondary CPUs ... Feb 13 04:11:17.552112 kernel: x86: Booting SMP configuration: Feb 13 04:11:17.552117 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Feb 13 04:11:17.552122 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 04:11:17.552128 kernel: #9 #10 #11 #12 #13 #14 #15 Feb 13 04:11:17.552133 kernel: smp: Brought up 1 node, 16 CPUs Feb 13 04:11:17.552138 kernel: smpboot: Max logical packages: 1 Feb 13 04:11:17.552143 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 13 04:11:17.552148 kernel: devtmpfs: initialized Feb 13 04:11:17.552153 kernel: x86/mm: Memory block size: 128MB Feb 13 04:11:17.552158 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x825de000-0x825defff] (4096 bytes) Feb 13 04:11:17.552163 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Feb 13 04:11:17.552169 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 04:11:17.552174 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 04:11:17.552179 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 04:11:17.552184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 04:11:17.552189 kernel: audit: initializing netlink subsys (disabled) Feb 13 04:11:17.552194 kernel: audit: type=2000 audit(1707797472.040:1): state=initialized audit_enabled=0 res=1 Feb 13 04:11:17.552199 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 04:11:17.552204 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 04:11:17.552209 kernel: cpuidle: using governor menu Feb 13 04:11:17.552215 kernel: ACPI: bus type PCI registered Feb 13 04:11:17.552220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 04:11:17.552225 kernel: dca service started, version 1.12.1 Feb 13 04:11:17.552230 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 04:11:17.552235 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Feb 13 04:11:17.552240 kernel: PCI: Using configuration type 1 for base access Feb 13 04:11:17.552245 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 13 04:11:17.552249 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 04:11:17.552257 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 04:11:17.552263 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 04:11:17.552268 kernel: ACPI: Added _OSI(Module Device) Feb 13 04:11:17.552273 kernel: ACPI: Added _OSI(Processor Device) Feb 13 04:11:17.552278 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 04:11:17.552283 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 04:11:17.552288 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 13 04:11:17.552293 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 13 04:11:17.552298 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 13 04:11:17.552303 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 13 04:11:17.552309 kernel: ACPI: Dynamic OEM Table Load: Feb 13 04:11:17.552314 kernel: ACPI: SSDT 0xFFFF90DA80212D00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 13 04:11:17.552320 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Feb 13 04:11:17.552324 kernel: ACPI: Dynamic OEM Table Load: Feb 13 04:11:17.552329 kernel: ACPI: SSDT 0xFFFF90DA81AE4800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 13 04:11:17.552334 kernel: ACPI: Dynamic OEM Table Load: Feb 13 04:11:17.552339 kernel: ACPI: SSDT 0xFFFF90DA81A58000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 13 04:11:17.552344 kernel: ACPI: Dynamic OEM Table Load: Feb 13 04:11:17.552349 kernel: ACPI: SSDT 0xFFFF90DA81A5E000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 13 04:11:17.552354 kernel: ACPI: Dynamic OEM Table Load: Feb 13 04:11:17.552360 kernel: ACPI: SSDT 0xFFFF90DA8014E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 13 04:11:17.552365 kernel: ACPI: Dynamic OEM Table Load: Feb 13 04:11:17.552370 kernel: ACPI: SSDT 0xFFFF90DA81AE0000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 13 04:11:17.552375 kernel: ACPI: Interpreter enabled Feb 13 04:11:17.552380 kernel: ACPI: PM: (supports S0 S5) Feb 13 04:11:17.552385 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 04:11:17.552390 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 13 04:11:17.552395 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 13 04:11:17.552399 kernel: HEST: Table parsing has been initialized. Feb 13 04:11:17.552405 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Feb 13 04:11:17.552410 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 04:11:17.552415 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 13 04:11:17.552420 kernel: ACPI: PM: Power Resource [USBC] Feb 13 04:11:17.552425 kernel: ACPI: PM: Power Resource [V0PR] Feb 13 04:11:17.552430 kernel: ACPI: PM: Power Resource [V1PR] Feb 13 04:11:17.552435 kernel: ACPI: PM: Power Resource [V2PR] Feb 13 04:11:17.552440 kernel: ACPI: PM: Power Resource [WRST] Feb 13 04:11:17.552445 kernel: ACPI: PM: Power Resource [FN00] Feb 13 04:11:17.552451 kernel: ACPI: PM: Power Resource [FN01] Feb 13 04:11:17.552456 kernel: ACPI: PM: Power Resource [FN02] Feb 13 04:11:17.552461 kernel: ACPI: PM: Power Resource [FN03] Feb 13 04:11:17.552465 kernel: ACPI: PM: Power Resource [FN04] Feb 13 04:11:17.552470 kernel: ACPI: PM: Power Resource [PIN] Feb 13 04:11:17.552475 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 13 04:11:17.552542 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 04:11:17.552588 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 13 04:11:17.552632 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 13 04:11:17.552639 kernel: PCI host bridge to bus 0000:00 Feb 13 04:11:17.552685 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 04:11:17.552724 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 04:11:17.552761 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 04:11:17.552798 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Feb 13 04:11:17.552834 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 13 04:11:17.552873 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 13 04:11:17.552925 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 13 04:11:17.552974 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 13 04:11:17.553019 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 13 04:11:17.553065 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 13 04:11:17.553107 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Feb 13 04:11:17.553156 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 13 04:11:17.553199 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Feb 13 04:11:17.553246 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 13 04:11:17.553295 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Feb 13 04:11:17.553339 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 13 04:11:17.553387 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 13 04:11:17.553431 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Feb 13 04:11:17.553473 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Feb 13 04:11:17.553518 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 13 04:11:17.553561 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 04:11:17.553607 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 13 04:11:17.553648 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 04:11:17.553696 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 13 04:11:17.553739 kernel: pci 0000:00:16.0: reg 0x10: [mem 
0x9551a000-0x9551afff 64bit] Feb 13 04:11:17.553781 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 13 04:11:17.553826 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 13 04:11:17.553868 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Feb 13 04:11:17.553909 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 13 04:11:17.553955 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 13 04:11:17.554000 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Feb 13 04:11:17.554042 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 13 04:11:17.554088 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 13 04:11:17.554130 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Feb 13 04:11:17.554171 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Feb 13 04:11:17.554212 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Feb 13 04:11:17.554261 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Feb 13 04:11:17.554312 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Feb 13 04:11:17.554355 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Feb 13 04:11:17.554397 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 13 04:11:17.554443 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 13 04:11:17.554488 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 13 04:11:17.554534 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 13 04:11:17.554578 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 13 04:11:17.554626 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 13 04:11:17.554668 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 13 04:11:17.554714 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 13 04:11:17.554757 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 13 04:11:17.554806 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Feb 13 04:11:17.554850 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Feb 13 04:11:17.554896 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 13 04:11:17.554938 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 04:11:17.555004 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 13 04:11:17.555052 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 13 04:11:17.555094 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Feb 13 04:11:17.555135 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 13 04:11:17.555181 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 13 04:11:17.555223 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 13 04:11:17.555299 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 13 04:11:17.555367 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 13 04:11:17.555410 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Feb 13 04:11:17.555454 kernel: pci 0000:01:00.0: PME# supported from D3cold Feb 13 04:11:17.555497 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 04:11:17.555539 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 04:11:17.555587 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 13 04:11:17.555630 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit 
pref] Feb 13 04:11:17.555676 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Feb 13 04:11:17.555718 kernel: pci 0000:01:00.1: PME# supported from D3cold Feb 13 04:11:17.555762 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 04:11:17.555805 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 04:11:17.555849 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 04:11:17.555890 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 04:11:17.555969 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 04:11:17.556031 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 04:11:17.556084 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 04:11:17.556128 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 04:11:17.556170 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 04:11:17.556213 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 04:11:17.556281 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 04:11:17.556343 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 04:11:17.556384 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 04:11:17.556430 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 04:11:17.556476 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 04:11:17.556521 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 04:11:17.556563 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 04:11:17.556606 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 04:11:17.556648 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 04:11:17.556690 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 04:11:17.556733 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 04:11:17.556776 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 04:11:17.556820 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 04:11:17.556867 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 04:11:17.556912 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 04:11:17.556955 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 04:11:17.556999 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 04:11:17.557041 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 04:11:17.557085 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 04:11:17.557127 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 04:11:17.557172 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 04:11:17.557221 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 04:11:17.557293 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 04:11:17.557359 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 04:11:17.557405 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 04:11:17.557451 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 04:11:17.557499 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 04:11:17.557545 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 04:11:17.557590 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 04:11:17.557633 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 
04:11:17.557677 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 04:11:17.557684 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 04:11:17.557690 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 04:11:17.557697 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 04:11:17.557702 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 04:11:17.557707 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 04:11:17.557712 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 04:11:17.557717 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 04:11:17.557723 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 04:11:17.557728 kernel: iommu: Default domain type: Translated Feb 13 04:11:17.557733 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 04:11:17.557777 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 04:11:17.557823 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 04:11:17.557868 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 04:11:17.557875 kernel: vgaarb: loaded Feb 13 04:11:17.557881 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 04:11:17.557886 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 04:11:17.557891 kernel: PTP clock support registered Feb 13 04:11:17.557897 kernel: PCI: Using ACPI for IRQ routing Feb 13 04:11:17.557902 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 04:11:17.557907 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 04:11:17.557913 kernel: e820: reserve RAM buffer [mem 0x825de000-0x83ffffff] Feb 13 04:11:17.557918 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 04:11:17.557923 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 04:11:17.557928 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 04:11:17.557933 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 04:11:17.557939 kernel: clocksource: Switched to clocksource tsc-early Feb 13 04:11:17.557944 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 04:11:17.557949 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 04:11:17.557954 kernel: pnp: PnP ACPI init Feb 13 04:11:17.558001 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 04:11:17.558045 kernel: pnp 00:02: [dma 0 disabled] Feb 13 04:11:17.558086 kernel: pnp 00:03: [dma 0 disabled] Feb 13 04:11:17.558127 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 04:11:17.558165 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 04:11:17.558205 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 04:11:17.558247 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 04:11:17.558329 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 04:11:17.558367 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 04:11:17.558405 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 04:11:17.558443 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 04:11:17.558479 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 04:11:17.558517 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 04:11:17.558556 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] 
could not be reserved Feb 13 04:11:17.558598 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 04:11:17.558637 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 04:11:17.558674 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 04:11:17.558711 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 04:11:17.558748 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 04:11:17.558786 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 04:11:17.558825 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 04:11:17.558865 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 04:11:17.558873 kernel: pnp: PnP ACPI: found 10 devices Feb 13 04:11:17.558879 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 04:11:17.558884 kernel: NET: Registered PF_INET protocol family Feb 13 04:11:17.558889 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 04:11:17.558894 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 04:11:17.558901 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 04:11:17.558906 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 04:11:17.558912 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 04:11:17.558917 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 04:11:17.558922 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 04:11:17.558927 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 04:11:17.558933 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 04:11:17.558938 kernel: NET: Registered PF_XDP protocol family Feb 13 04:11:17.558980 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 04:11:17.559024 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 04:11:17.559067 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 04:11:17.559110 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 04:11:17.559154 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 04:11:17.559198 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 04:11:17.559241 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 04:11:17.559327 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 04:11:17.559370 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 04:11:17.559413 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 04:11:17.559455 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 04:11:17.559497 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 04:11:17.559538 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 04:11:17.559580 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 04:11:17.559623 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 04:11:17.559665 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 04:11:17.559707 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 04:11:17.559748 kernel: pci 0000:00:1c.0: PCI bridge 
to [bus 05] Feb 13 04:11:17.559792 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 04:11:17.559834 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 04:11:17.559878 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 04:11:17.559920 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 04:11:17.559964 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 04:11:17.560006 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 04:11:17.560045 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 04:11:17.560083 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 04:11:17.560120 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 04:11:17.560157 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 04:11:17.560193 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 04:11:17.560230 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 04:11:17.560315 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 04:11:17.560357 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 04:11:17.560399 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 04:11:17.560438 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 04:11:17.560481 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 04:11:17.560519 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 04:11:17.560564 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 04:11:17.560604 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 04:11:17.560646 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 04:11:17.560687 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 04:11:17.560694 kernel: PCI: CLS 64 bytes, default 64 Feb 13 04:11:17.560700 kernel: DMAR: No ATSR found Feb 13 04:11:17.560705 kernel: DMAR: No SATC found Feb 13 04:11:17.560710 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 04:11:17.560754 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 04:11:17.560797 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 04:11:17.560839 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 04:11:17.560882 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 04:11:17.560923 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 04:11:17.560965 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 04:11:17.561006 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 04:11:17.561048 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 04:11:17.561090 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 04:11:17.561133 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 04:11:17.561174 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 04:11:17.561215 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 04:11:17.561259 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 04:11:17.561323 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 04:11:17.561367 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 04:11:17.561409 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 04:11:17.561452 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 13 04:11:17.561496 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 04:11:17.561538 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 
04:11:17.561581 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 04:11:17.561623 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 04:11:17.561667 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 04:11:17.561711 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 04:11:17.561756 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 04:11:17.561800 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 04:11:17.561847 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 04:11:17.561892 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 04:11:17.561900 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 04:11:17.561905 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 04:11:17.561911 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 04:11:17.561916 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 04:11:17.561921 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 04:11:17.561927 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 04:11:17.561933 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 04:11:17.561979 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 04:11:17.561987 kernel: Initialise system trusted keyrings Feb 13 04:11:17.561992 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 04:11:17.561998 kernel: Key type asymmetric registered Feb 13 04:11:17.562003 kernel: Asymmetric key parser 'x509' registered Feb 13 04:11:17.562008 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 04:11:17.562013 kernel: io scheduler mq-deadline registered Feb 13 04:11:17.562020 kernel: io scheduler kyber registered Feb 13 04:11:17.562025 kernel: io scheduler bfq registered Feb 13 04:11:17.562067 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 04:11:17.562110 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 04:11:17.562153 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 04:11:17.562196 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 04:11:17.562239 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 04:11:17.562285 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 04:11:17.562336 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 04:11:17.562344 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 04:11:17.562350 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 04:11:17.562355 kernel: pstore: Registered erst as persistent store backend Feb 13 04:11:17.562361 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 04:11:17.562366 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 04:11:17.562371 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 04:11:17.562377 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 04:11:17.562383 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 04:11:17.562426 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 04:11:17.562434 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 04:11:17.562473 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 04:11:17.562528 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 04:11:17.562566 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T04:11:16 UTC (1707797476) Feb 13 04:11:17.562604 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 04:11:17.562611 kernel: fail to initialize ptp_kvm Feb 13 04:11:17.562618 kernel: intel_pstate: Intel P-state driver initializing Feb 13 04:11:17.562623 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 04:11:17.562628 kernel: intel_pstate: HWP enabled Feb 13 04:11:17.562633 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 04:11:17.562638 kernel: vesafb: scrolling: redraw Feb 13 04:11:17.562644 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 04:11:17.562649 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000054c68c41, using 768k, total 768k Feb 13 04:11:17.562654 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 04:11:17.562659 kernel: fb0: VESA VGA frame buffer device Feb 13 04:11:17.562666 kernel: NET: Registered PF_INET6 protocol family Feb 13 04:11:17.562671 kernel: Segment Routing with IPv6 Feb 13 04:11:17.562676 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 04:11:17.562681 kernel: NET: Registered PF_PACKET protocol family Feb 13 04:11:17.562686 kernel: Key type dns_resolver registered Feb 13 04:11:17.562691 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 04:11:17.562697 kernel: microcode: Microcode Update Driver: v2.2. Feb 13 04:11:17.562702 kernel: IPI shorthand broadcast: enabled Feb 13 04:11:17.562707 kernel: sched_clock: Marking stable (1677604771, 1339877372)->(4436939550, -1419457407) Feb 13 04:11:17.562713 kernel: registered taskstats version 1 Feb 13 04:11:17.562718 kernel: Loading compiled-in X.509 certificates Feb 13 04:11:17.562723 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 04:11:17.562728 kernel: Key type .fscrypt registered Feb 13 04:11:17.562733 kernel: Key type fscrypt-provisioning registered Feb 13 04:11:17.562738 kernel: pstore: Using crash dump compression: deflate Feb 13 04:11:17.562744 kernel: ima: Allocated hash algorithm: sha1 Feb 13 04:11:17.562749 kernel: ima: No architecture policies found Feb 13 04:11:17.562755 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 04:11:17.562760 kernel: Write protecting the kernel read-only data: 28672k Feb 13 04:11:17.562765 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 04:11:17.562770 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 04:11:17.562776 kernel: Run /init as init process Feb 13 04:11:17.562781 kernel: with arguments: Feb 13 04:11:17.562786 kernel: /init Feb 13 04:11:17.562791 kernel: with environment: Feb 13 04:11:17.562796 kernel: HOME=/ Feb 13 04:11:17.562802 kernel: TERM=linux Feb 13 04:11:17.562807 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 04:11:17.562813 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 04:11:17.562819 systemd[1]: Detected architecture x86-64. 
Feb 13 04:11:17.562825 systemd[1]: Running in initrd. Feb 13 04:11:17.562830 systemd[1]: No hostname configured, using default hostname. Feb 13 04:11:17.562836 systemd[1]: Hostname set to . Feb 13 04:11:17.562841 systemd[1]: Initializing machine ID from random generator. Feb 13 04:11:17.562847 systemd[1]: Queued start job for default target initrd.target. Feb 13 04:11:17.562852 systemd[1]: Started systemd-ask-password-console.path. Feb 13 04:11:17.562858 systemd[1]: Reached target cryptsetup.target. Feb 13 04:11:17.562863 systemd[1]: Reached target paths.target. Feb 13 04:11:17.562868 systemd[1]: Reached target slices.target. Feb 13 04:11:17.562873 systemd[1]: Reached target swap.target. Feb 13 04:11:17.562879 systemd[1]: Reached target timers.target. Feb 13 04:11:17.562884 systemd[1]: Listening on iscsid.socket. Feb 13 04:11:17.562890 systemd[1]: Listening on iscsiuio.socket. Feb 13 04:11:17.562896 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 04:11:17.562901 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 04:11:17.562906 systemd[1]: Listening on systemd-journald.socket. Feb 13 04:11:17.562912 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Feb 13 04:11:17.562917 systemd[1]: Listening on systemd-networkd.socket. Feb 13 04:11:17.562923 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Feb 13 04:11:17.562928 kernel: clocksource: Switched to clocksource tsc Feb 13 04:11:17.562934 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 04:11:17.562939 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 04:11:17.562945 systemd[1]: Reached target sockets.target. Feb 13 04:11:17.562950 systemd[1]: Starting kmod-static-nodes.service... Feb 13 04:11:17.562955 systemd[1]: Finished network-cleanup.service. Feb 13 04:11:17.562961 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 04:11:17.562966 systemd[1]: Starting systemd-journald.service... Feb 13 04:11:17.562971 systemd[1]: Starting systemd-modules-load.service... Feb 13 04:11:17.562979 systemd-journald[266]: Journal started Feb 13 04:11:17.563004 systemd-journald[266]: Runtime Journal (/run/log/journal/1340dedd3efe48b98d573f413afa41a6) is 8.0M, max 640.1M, 632.1M free. Feb 13 04:11:17.564917 systemd-modules-load[267]: Inserted module 'overlay' Feb 13 04:11:17.570000 audit: BPF prog-id=6 op=LOAD Feb 13 04:11:17.589293 kernel: audit: type=1334 audit(1707797477.570:2): prog-id=6 op=LOAD Feb 13 04:11:17.589324 systemd[1]: Starting systemd-resolved.service... Feb 13 04:11:17.638301 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 04:11:17.638317 systemd[1]: Starting systemd-vconsole-setup.service... Feb 13 04:11:17.668300 kernel: Bridge firewalling registered Feb 13 04:11:17.668318 systemd[1]: Started systemd-journald.service. Feb 13 04:11:17.683186 systemd-modules-load[267]: Inserted module 'br_netfilter' Feb 13 04:11:17.731984 kernel: audit: type=1130 audit(1707797477.690:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:17.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 04:11:17.689247 systemd-resolved[269]: Positive Trust Anchors: Feb 13 04:11:17.808258 kernel: SCSI subsystem initialized Feb 13 04:11:17.808271 kernel: audit: type=1130 audit(1707797477.743:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:17.808281 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 04:11:17.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:17.689251 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 04:11:17.908875 kernel: device-mapper: uevent: version 1.0.3 Feb 13 04:11:17.908907 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 13 04:11:17.908924 kernel: audit: type=1130 audit(1707797477.864:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:17.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:17.689275 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 04:11:17.983525 kernel: audit: type=1130 audit(1707797477.917:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:17.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:17.690846 systemd-resolved[269]: Defaulting to hostname 'linux'. Feb 13 04:11:17.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:17.691462 systemd[1]: Started systemd-resolved.service. Feb 13 04:11:18.091327 kernel: audit: type=1130 audit(1707797477.991:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:18.091358 kernel: audit: type=1130 audit(1707797478.044:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:18.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 13 04:11:17.744423 systemd[1]: Finished kmod-static-nodes.service. Feb 13 04:11:17.866342 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 04:11:17.909298 systemd-modules-load[267]: Inserted module 'dm_multipath' Feb 13 04:11:17.918745 systemd[1]: Finished systemd-modules-load.service. Feb 13 04:11:17.992618 systemd[1]: Finished systemd-vconsole-setup.service. Feb 13 04:11:18.045548 systemd[1]: Reached target nss-lookup.target. Feb 13 04:11:18.099859 systemd[1]: Starting dracut-cmdline-ask.service... Feb 13 04:11:18.119792 systemd[1]: Starting systemd-sysctl.service... Feb 13 04:11:18.120091 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 04:11:18.122873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 04:11:18.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:18.123627 systemd[1]: Finished systemd-sysctl.service. Feb 13 04:11:18.172269 kernel: audit: type=1130 audit(1707797478.121:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:18.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:18.185615 systemd[1]: Finished dracut-cmdline-ask.service. Feb 13 04:11:18.251332 kernel: audit: type=1130 audit(1707797478.184:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:18.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:18.242861 systemd[1]: Starting dracut-cmdline.service... Feb 13 04:11:18.265354 dracut-cmdline[292]: dracut-dracut-053 Feb 13 04:11:18.265354 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 13 04:11:18.265354 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 04:11:18.365294 kernel: Loading iSCSI transport class v2.0-870. Feb 13 04:11:18.365310 kernel: iscsi: registered transport (tcp) Feb 13 04:11:18.365318 kernel: iscsi: registered transport (qla4xxx) Feb 13 04:11:18.383604 kernel: QLogic iSCSI HBA Driver Feb 13 04:11:18.399272 systemd[1]: Finished dracut-cmdline.service. Feb 13 04:11:18.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:18.409694 systemd[1]: Starting dracut-pre-udev.service... 
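Annotation: dracut-cmdline echoes the full kernel command line it is acting on (wrapped across several journal records above). A small sketch of splitting such a command line into parameters; repeated keys like console= are kept as lists and bare flags like flatcar.autologin map to None. Quoted values are not handled, which this particular command line does not need:

    # Split a kernel command line such as the one dracut prints above into
    # key/value parameters. Reads /proc/cmdline when run on a live system.
    from collections import defaultdict

    def parse_cmdline(cmdline: str) -> dict:
        params = defaultdict(list)
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key].append(value if sep else None)
        return dict(params)

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            for key, values in sorted(parse_cmdline(f.read()).items()):
                print(key, values)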
Feb 13 04:11:18.536291 kernel: raid6: avx2x4 gen() 23705 MB/s Feb 13 04:11:18.571290 kernel: raid6: avx2x4 xor() 21532 MB/s Feb 13 04:11:18.606291 kernel: raid6: avx2x2 gen() 54924 MB/s Feb 13 04:11:18.641291 kernel: raid6: avx2x2 xor() 33575 MB/s Feb 13 04:11:18.676290 kernel: raid6: avx2x1 gen() 49019 MB/s Feb 13 04:11:18.710291 kernel: raid6: avx2x1 xor() 30248 MB/s Feb 13 04:11:18.744290 kernel: raid6: sse2x4 gen() 23173 MB/s Feb 13 04:11:18.778292 kernel: raid6: sse2x4 xor() 12912 MB/s Feb 13 04:11:18.812290 kernel: raid6: sse2x2 gen() 23251 MB/s Feb 13 04:11:18.846297 kernel: raid6: sse2x2 xor() 14140 MB/s Feb 13 04:11:18.880291 kernel: raid6: sse2x1 gen() 19261 MB/s Feb 13 04:11:18.931840 kernel: raid6: sse2x1 xor() 9420 MB/s Feb 13 04:11:18.931856 kernel: raid6: using algorithm avx2x2 gen() 54924 MB/s Feb 13 04:11:18.931863 kernel: raid6: .... xor() 33575 MB/s, rmw enabled Feb 13 04:11:18.949879 kernel: raid6: using avx2x2 recovery algorithm Feb 13 04:11:18.995288 kernel: xor: automatically using best checksumming function avx Feb 13 04:11:19.072290 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 04:11:19.077556 systemd[1]: Finished dracut-pre-udev.service. Feb 13 04:11:19.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:19.085000 audit: BPF prog-id=7 op=LOAD Feb 13 04:11:19.085000 audit: BPF prog-id=8 op=LOAD Feb 13 04:11:19.087315 systemd[1]: Starting systemd-udevd.service... Feb 13 04:11:19.095347 systemd-udevd[475]: Using default interface naming scheme 'v252'. Feb 13 04:11:19.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:19.100402 systemd[1]: Started systemd-udevd.service. Feb 13 04:11:19.140381 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Feb 13 04:11:19.116898 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 04:11:19.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:19.142826 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 04:11:19.158287 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 04:11:19.208299 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 04:11:19.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:19.236276 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 04:11:19.257266 kernel: ACPI: bus type USB registered Feb 13 04:11:19.257298 kernel: libata version 3.00 loaded. Feb 13 04:11:19.257306 kernel: usbcore: registered new interface driver usbfs Feb 13 04:11:19.257313 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 04:11:19.291879 kernel: usbcore: registered new interface driver hub Feb 13 04:11:19.326751 kernel: usbcore: registered new device driver usb Feb 13 04:11:19.327260 kernel: AES CTR mode by8 optimization enabled Feb 13 04:11:19.344265 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 04:11:19.377659 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
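Annotation: the raid6 lines above are the boot-time benchmark of the parity-generation routines, after which the kernel settles on avx2x2. A sketch of the selection rule using the throughputs from this log; the kernel essentially keeps the candidate with the fastest gen() pass and reports its xor() rate alongside:

    # Replay the raid6 algorithm choice with the gen()/xor() MB/s figures
    # benchmarked above; avx2x2 wins on gen() throughput.
    measured = {
        "avx2x4": (23705, 21532),
        "avx2x2": (54924, 33575),
        "avx2x1": (49019, 30248),
        "sse2x4": (23173, 12912),
        "sse2x2": (23251, 14140),
        "sse2x1": (19261, 9420),
    }

    best = max(measured, key=lambda name: measured[name][0])
    gen_rate, xor_rate = measured[best]
    print(f"raid6: using algorithm {best} gen() {gen_rate} MB/s")
    print(f"raid6: .... xor() {xor_rate} MB/s")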
Feb 13 04:11:19.415066 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 04:11:19.415166 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 04:11:19.415234 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 04:11:19.415298 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 04:11:19.419261 kernel: pps pps0: new PPS source ptp0 Feb 13 04:11:19.419343 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 04:11:19.419412 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 04:11:19.419474 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:4a Feb 13 04:11:19.419534 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 04:11:19.419594 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 04:11:19.453257 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 04:11:19.468259 kernel: pps pps1: new PPS source ptp1 Feb 13 04:11:19.468337 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 04:11:19.500184 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 04:11:19.500289 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 04:11:19.532595 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 04:11:19.532864 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 04:11:19.568743 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:4b Feb 13 04:11:19.569142 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 04:11:19.569339 kernel: hub 1-0:1.0: USB hub found Feb 13 04:11:19.569505 kernel: hub 1-0:1.0: 16 ports detected Feb 13 04:11:19.570266 kernel: hub 2-0:1.0: USB hub found Feb 13 04:11:19.570504 kernel: hub 2-0:1.0: 10 ports detected Feb 13 04:11:19.570671 kernel: usb: port power management may be unreliable Feb 13 04:11:19.596379 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 04:11:19.609616 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 04:11:19.609684 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 04:11:19.625364 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 04:11:19.656261 kernel: scsi host0: ahci Feb 13 04:11:19.693544 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 04:11:19.693620 kernel: scsi host1: ahci Feb 13 04:11:19.729875 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 04:11:19.729950 kernel: scsi host2: ahci Feb 13 04:11:19.824292 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 04:11:19.824320 kernel: scsi host3: ahci Feb 13 04:11:19.824334 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 04:11:19.906303 kernel: scsi host4: ahci Feb 13 04:11:19.931936 kernel: scsi host5: ahci Feb 13 04:11:19.932048 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 04:11:19.950458 kernel: scsi host6: ahci Feb 13 04:11:19.977288 kernel: hub 1-14:1.0: USB hub found Feb 13 04:11:19.977410 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 138 Feb 13 04:11:20.009260 kernel: hub 1-14:1.0: 4 ports detected Feb 13 04:11:20.009397 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 138 Feb 13 04:11:20.009406 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 138 Feb 13 04:11:20.061863 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 138 Feb 13 04:11:20.061879 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 138 Feb 13 04:11:20.079109 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 138 Feb 13 04:11:20.096210 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 138 Feb 13 04:11:20.170296 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 04:11:20.217083 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 04:11:20.217196 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 04:11:20.312273 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 04:11:20.427259 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 04:11:20.427277 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 04:11:20.444264 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 04:11:20.459303 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 04:11:20.474288 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 04:11:20.490317 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 04:11:20.504260 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 04:11:20.504352 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 04:11:20.538303 kernel: port_module: 9 callbacks suppressed Feb 13 04:11:20.538319 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 04:11:20.538388 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 04:11:20.568264 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 04:11:20.568341 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 04:11:20.617301 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 04:11:20.666525 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 04:11:20.666541 kernel: ata1.00: Features: 
NCQ-prio Feb 13 04:11:20.666549 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 04:11:20.696245 kernel: ata2.00: Features: NCQ-prio Feb 13 04:11:20.715305 kernel: ata1.00: configured for UDMA/133 Feb 13 04:11:20.715372 kernel: ata2.00: configured for UDMA/133 Feb 13 04:11:20.715388 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 04:11:20.746303 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 04:11:20.794336 kernel: usbcore: registered new interface driver usbhid Feb 13 04:11:20.794354 kernel: usbhid: USB HID core driver Feb 13 04:11:20.827302 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 04:11:20.827317 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 04:11:20.859618 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 04:11:20.859637 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 04:11:20.873635 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 04:11:20.873714 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 04:11:20.891313 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 04:11:20.891390 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 04:11:20.891458 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 04:11:20.891467 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 04:11:20.907423 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 04:11:20.907499 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 04:11:20.922080 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 04:11:20.953101 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 04:11:20.953177 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 04:11:20.985869 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 04:11:21.101761 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 04:11:21.101784 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 04:11:21.138190 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 04:11:21.138207 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 04:11:21.154266 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 04:11:21.206406 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 04:11:21.206422 kernel: GPT:9289727 != 937703087 Feb 13 04:11:21.206430 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 04:11:21.223538 kernel: GPT:9289727 != 937703087 Feb 13 04:11:21.238292 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 04:11:21.254802 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 04:11:21.287809 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 04:11:21.287824 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 04:11:21.323262 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Feb 13 04:11:21.347148 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
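Annotation: the GPT warnings above (9289727 != 937703087) mean the image's backup GPT header was written for a much smaller device than the 480 GB drive it now sits on, which is expected on a freshly provisioned Flatcar machine before the partition table is fixed up. A minimal sketch, assuming the 512-byte logical blocks reported for sda, that reads the primary GPT header at LBA 1 and compares its alternate-header pointer with the real last LBA:

    # Check whether a disk's alternate (backup) GPT header really sits on
    # the last LBA, mirroring the kernel warning above. Needs root to open
    # the block device.
    import os
    import struct

    def check_gpt_backup(path: str, block: int = 512) -> None:
        with open(path, "rb") as dev:
            dev.seek(0, os.SEEK_END)
            last_lba = dev.tell() // block - 1
            dev.seek(block)                      # primary GPT header is at LBA 1
            sig, _rev, _hsize, _crc, _rsvd, _cur, backup = struct.unpack(
                "<8s4I2Q", dev.read(40))
        if sig != b"EFI PART":
            print(f"{path}: no GPT signature at LBA 1")
        elif backup != last_lba:
            print(f"GPT:{backup} != {last_lba} (alternate header not at end of disk)")
        else:
            print(f"{path}: alternate GPT header is at the end of the disk")

    if __name__ == "__main__":
        check_gpt_backup("/dev/sda")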
Feb 13 04:11:21.397550 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Feb 13 04:11:21.397632 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (522) Feb 13 04:11:21.387555 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 13 04:11:21.413514 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 04:11:21.437659 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 13 04:11:21.462914 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 04:11:21.473349 systemd[1]: Starting disk-uuid.service... Feb 13 04:11:21.507370 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 04:11:21.507384 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 04:11:21.507435 disk-uuid[687]: Primary Header is updated. Feb 13 04:11:21.507435 disk-uuid[687]: Secondary Entries is updated. Feb 13 04:11:21.507435 disk-uuid[687]: Secondary Header is updated. Feb 13 04:11:21.581401 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 04:11:21.581412 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 04:11:21.581418 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 04:11:21.581425 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 04:11:22.567079 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 04:11:22.586089 disk-uuid[689]: The operation has completed successfully. Feb 13 04:11:22.594508 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 04:11:22.621924 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 04:11:22.719179 kernel: audit: type=1130 audit(1707797482.628:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:22.719193 kernel: audit: type=1131 audit(1707797482.628:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:22.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:22.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:22.621982 systemd[1]: Finished disk-uuid.service. Feb 13 04:11:22.748363 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 04:11:22.633598 systemd[1]: Starting verity-setup.service... Feb 13 04:11:22.777987 systemd[1]: Found device dev-mapper-usr.device. Feb 13 04:11:22.778805 systemd[1]: Mounting sysusr-usr.mount... Feb 13 04:11:22.799458 systemd[1]: Finished verity-setup.service. Feb 13 04:11:22.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:22.861260 kernel: audit: type=1130 audit(1707797482.813:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:22.887902 systemd[1]: Mounted sysusr-usr.mount. Feb 13 04:11:22.903355 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
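Annotation: the Found device messages above correspond to udev-generated symlinks under /dev/disk. A small sketch, assuming those directories exist, that resolves the same by-label, by-partlabel and by-partuuid links systemd turns into .device units:

    # Print the udev-maintained symlinks that back units such as
    # dev-disk-by\x2dlabel-ROOT.device seen above.
    from pathlib import Path

    def list_links(directory: str) -> None:
        base = Path(directory)
        if not base.is_dir():
            return
        print(directory)
        for link in sorted(base.iterdir()):
            print(f"  {link.name} -> {link.resolve()}")

    if __name__ == "__main__":
        for d in ("/dev/disk/by-label",
                  "/dev/disk/by-partlabel",
                  "/dev/disk/by-partuuid"):
            list_links(d)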
Feb 13 04:11:22.896545 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 13 04:11:22.983209 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 04:11:22.983226 kernel: BTRFS info (device sda6): using free space tree Feb 13 04:11:22.983234 kernel: BTRFS info (device sda6): has skinny extents Feb 13 04:11:22.983241 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 04:11:22.896944 systemd[1]: Starting ignition-setup.service... Feb 13 04:11:22.916638 systemd[1]: Starting parse-ip-for-networkd.service... Feb 13 04:11:23.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:22.991760 systemd[1]: Finished ignition-setup.service. Feb 13 04:11:23.109873 kernel: audit: type=1130 audit(1707797483.005:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.109961 kernel: audit: type=1130 audit(1707797483.062:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.006678 systemd[1]: Finished parse-ip-for-networkd.service. Feb 13 04:11:23.141741 kernel: audit: type=1334 audit(1707797483.119:24): prog-id=9 op=LOAD Feb 13 04:11:23.119000 audit: BPF prog-id=9 op=LOAD Feb 13 04:11:23.063916 systemd[1]: Starting ignition-fetch-offline.service... Feb 13 04:11:23.121651 systemd[1]: Starting systemd-networkd.service... Feb 13 04:11:23.157401 systemd-networkd[869]: lo: Link UP Feb 13 04:11:23.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.198577 ignition[868]: Ignition 2.14.0 Feb 13 04:11:23.235319 kernel: audit: type=1130 audit(1707797483.173:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.157404 systemd-networkd[869]: lo: Gained carrier Feb 13 04:11:23.198581 ignition[868]: Stage: fetch-offline Feb 13 04:11:23.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.157714 systemd-networkd[869]: Enumeration completed Feb 13 04:11:23.385541 kernel: audit: type=1130 audit(1707797483.253:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.385553 kernel: audit: type=1130 audit(1707797483.315:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 04:11:23.385561 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 04:11:23.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.198607 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 04:11:23.408335 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Feb 13 04:11:23.157783 systemd[1]: Started systemd-networkd.service. Feb 13 04:11:23.198624 ignition[868]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 04:11:23.158322 systemd-networkd[869]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 04:11:23.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.206194 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 04:11:23.463355 iscsid[899]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 04:11:23.463355 iscsid[899]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 04:11:23.463355 iscsid[899]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 13 04:11:23.463355 iscsid[899]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 13 04:11:23.463355 iscsid[899]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 13 04:11:23.463355 iscsid[899]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 04:11:23.463355 iscsid[899]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 04:11:23.625389 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 04:11:23.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:23.174394 systemd[1]: Reached target network.target. Feb 13 04:11:23.206260 ignition[868]: parsed url from cmdline: "" Feb 13 04:11:23.216911 unknown[868]: fetched base config from "system" Feb 13 04:11:23.206262 ignition[868]: no config URL provided Feb 13 04:11:23.216916 unknown[868]: fetched user config from "system" Feb 13 04:11:23.206265 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 04:11:23.229831 systemd[1]: Starting iscsiuio.service... Feb 13 04:11:23.206283 ignition[868]: parsing config with SHA512: 1f3ba5f317e1eb6c5f2cae57086ebda9d91e693d367687db2949ca7514525df1ef78f00e555760ee7ece56c46d3d6d8bd1a2200be409910c2a4d93b9189aa2a5 Feb 13 04:11:23.235494 systemd[1]: Started iscsiuio.service. 
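Annotation: iscsid's warnings above are only about the missing /etc/iscsi/initiatorname.iscsi; no iSCSI targets are in use on this machine. For completeness, a hypothetical sketch that writes such a file in the iqn.yyyy-mm.<reversed-domain>:<identifier> form the warning describes; the domain and identifier below are placeholders, not values from this system:

    # Write an /etc/iscsi/initiatorname.iscsi containing a syntactically
    # valid IQN. The reversed domain and random suffix are illustrative only.
    import datetime
    import uuid

    def make_initiator_name(reversed_domain: str = "com.example") -> str:
        today = datetime.date.today()
        return f"iqn.{today.year}-{today.month:02d}.{reversed_domain}:{uuid.uuid4().hex[:12]}"

    def write_initiator_file(path: str = "/etc/iscsi/initiatorname.iscsi") -> str:
        name = make_initiator_name()
        with open(path, "w") as f:
            f.write(f"InitiatorName={name}\n")
        return name

    if __name__ == "__main__":
        print(write_initiator_file())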
Feb 13 04:11:23.217190 ignition[868]: fetch-offline: fetch-offline passed Feb 13 04:11:23.255030 systemd[1]: Finished ignition-fetch-offline.service. Feb 13 04:11:23.217194 ignition[868]: POST message to Packet Timeline Feb 13 04:11:23.316555 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 04:11:23.217201 ignition[868]: POST Status error: resource requires networking Feb 13 04:11:23.317099 systemd[1]: Starting ignition-kargs.service... Feb 13 04:11:23.217234 ignition[868]: Ignition finished successfully Feb 13 04:11:23.386492 systemd-networkd[869]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 04:11:23.389928 ignition[887]: Ignition 2.14.0 Feb 13 04:11:23.399855 systemd[1]: Starting iscsid.service... Feb 13 04:11:23.389932 ignition[887]: Stage: kargs Feb 13 04:11:23.422495 systemd[1]: Started iscsid.service. Feb 13 04:11:23.389987 ignition[887]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 04:11:23.436822 systemd[1]: Starting dracut-initqueue.service... Feb 13 04:11:23.389996 ignition[887]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 04:11:23.448581 systemd[1]: Finished dracut-initqueue.service. Feb 13 04:11:23.391372 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 04:11:23.471372 systemd[1]: Reached target remote-fs-pre.target. Feb 13 04:11:23.392786 ignition[887]: kargs: kargs passed Feb 13 04:11:23.516474 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 04:11:23.392790 ignition[887]: POST message to Packet Timeline Feb 13 04:11:23.548612 systemd[1]: Reached target remote-fs.target. Feb 13 04:11:23.392801 ignition[887]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 04:11:23.571116 systemd[1]: Starting dracut-pre-mount.service... Feb 13 04:11:23.395412 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33875->[::1]:53: read: connection refused Feb 13 04:11:23.587603 systemd[1]: Finished dracut-pre-mount.service. Feb 13 04:11:23.595775 ignition[887]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 04:11:23.609277 systemd-networkd[869]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 04:11:23.596230 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36896->[::1]:53: read: connection refused Feb 13 04:11:23.637756 systemd-networkd[869]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
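Annotation: while Ignition's kargs stage retries its metadata GET above, systemd-networkd is still bringing up eno1, eno2 and the renamed Mellanox ports. A small sketch listing the interfaces with the state sysfs reports for them:

    # List network interfaces with the MAC address and operational state
    # exposed under /sys/class/net, e.g. eno1, eno2, enp1s0f0np0, enp1s0f1np1.
    from pathlib import Path

    def list_interfaces() -> None:
        for iface in sorted(Path("/sys/class/net").iterdir()):
            mac = (iface / "address").read_text().strip()
            state = (iface / "operstate").read_text().strip()
            print(f"{iface.name:>16}  {mac}  {state}")

    if __name__ == "__main__":
        list_interfaces()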
Feb 13 04:11:23.666732 systemd-networkd[869]: enp1s0f1np1: Link UP Feb 13 04:11:23.666952 systemd-networkd[869]: enp1s0f1np1: Gained carrier Feb 13 04:11:23.683794 systemd-networkd[869]: enp1s0f0np0: Link UP Feb 13 04:11:23.684159 systemd-networkd[869]: eno2: Link UP Feb 13 04:11:23.684531 systemd-networkd[869]: eno1: Link UP Feb 13 04:11:23.997129 ignition[887]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 04:11:23.998418 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40335->[::1]:53: read: connection refused Feb 13 04:11:24.440740 systemd-networkd[869]: enp1s0f0np0: Gained carrier Feb 13 04:11:24.449532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Feb 13 04:11:24.481590 systemd-networkd[869]: enp1s0f0np0: DHCPv4 address 139.178.94.233/31, gateway 139.178.94.232 acquired from 145.40.83.140 Feb 13 04:11:24.798836 ignition[887]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 04:11:24.800358 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44095->[::1]:53: read: connection refused Feb 13 04:11:25.315752 systemd-networkd[869]: enp1s0f1np1: Gained IPv6LL Feb 13 04:11:25.699780 systemd-networkd[869]: enp1s0f0np0: Gained IPv6LL Feb 13 04:11:26.401416 ignition[887]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 04:11:26.402863 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35519->[::1]:53: read: connection refused Feb 13 04:11:29.606138 ignition[887]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 04:11:29.648512 ignition[887]: GET result: OK Feb 13 04:11:29.904192 ignition[887]: Ignition finished successfully Feb 13 04:11:29.908939 systemd[1]: Finished ignition-kargs.service. Feb 13 04:11:29.991579 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 13 04:11:29.991611 kernel: audit: type=1130 audit(1707797489.919:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:29.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:29.929689 ignition[916]: Ignition 2.14.0 Feb 13 04:11:29.922512 systemd[1]: Starting ignition-disks.service... Feb 13 04:11:29.929714 ignition[916]: Stage: disks Feb 13 04:11:29.929785 ignition[916]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 04:11:29.929795 ignition[916]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 04:11:29.931201 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 04:11:29.932850 ignition[916]: disks: disks passed Feb 13 04:11:29.932854 ignition[916]: POST message to Packet Timeline Feb 13 04:11:29.932863 ignition[916]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 04:11:29.955997 ignition[916]: GET result: OK Feb 13 04:11:30.139505 ignition[916]: Ignition finished successfully Feb 13 04:11:30.142582 systemd[1]: Finished ignition-disks.service. 
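Annotation: the kargs stage above needed six attempts against https://metadata.packet.net/metadata because name resolution only works once the links gain carrier and DHCP completes. A minimal sketch of that retry loop; the timeout and backoff values are illustrative, not Ignition's actual ones:

    # Fetch the Packet/Equinix Metal metadata document, backing off while
    # the network is still coming up, as the GET attempts above do.
    import time
    import urllib.error
    import urllib.request

    METADATA_URL = "https://metadata.packet.net/metadata"

    def fetch_metadata(url: str = METADATA_URL, attempts: int = 6) -> bytes:
        delay = 1.0
        for attempt in range(1, attempts + 1):
            try:
                print(f"GET {url}: attempt #{attempt}")
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print(f"GET error: {err}")
                time.sleep(delay)
                delay = min(delay * 2, 30)
        raise RuntimeError(f"giving up on {url} after {attempts} attempts")

    if __name__ == "__main__":
        print(fetch_metadata()[:200])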
Feb 13 04:11:30.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.154857 systemd[1]: Reached target initrd-root-device.target. Feb 13 04:11:30.236440 kernel: audit: type=1130 audit(1707797490.153:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.222436 systemd[1]: Reached target local-fs-pre.target. Feb 13 04:11:30.222550 systemd[1]: Reached target local-fs.target. Feb 13 04:11:30.245451 systemd[1]: Reached target sysinit.target. Feb 13 04:11:30.245566 systemd[1]: Reached target basic.target. Feb 13 04:11:30.267308 systemd[1]: Starting systemd-fsck-root.service... Feb 13 04:11:30.289176 systemd-fsck[931]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 13 04:11:30.304805 systemd[1]: Finished systemd-fsck-root.service. Feb 13 04:11:30.392457 kernel: audit: type=1130 audit(1707797490.312:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.392473 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 04:11:30.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.321331 systemd[1]: Mounting sysroot.mount... Feb 13 04:11:30.399932 systemd[1]: Mounted sysroot.mount. Feb 13 04:11:30.413555 systemd[1]: Reached target initrd-root-fs.target. Feb 13 04:11:30.421200 systemd[1]: Mounting sysroot-usr.mount... Feb 13 04:11:30.446131 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 13 04:11:30.454784 systemd[1]: Starting flatcar-static-network.service... Feb 13 04:11:30.470387 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 04:11:30.470422 systemd[1]: Reached target ignition-diskful.target. Feb 13 04:11:30.489419 systemd[1]: Mounted sysroot-usr.mount. Feb 13 04:11:30.512419 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 04:11:30.523639 systemd[1]: Starting initrd-setup-root.service... Feb 13 04:11:30.644730 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943) Feb 13 04:11:30.644751 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 04:11:30.644764 kernel: BTRFS info (device sda6): using free space tree Feb 13 04:11:30.644772 kernel: BTRFS info (device sda6): has skinny extents Feb 13 04:11:30.644779 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 04:11:30.644790 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 04:11:30.705238 kernel: audit: type=1130 audit(1707797490.651:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 04:11:30.705356 coreos-metadata[938]: Feb 13 04:11:30.563 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 04:11:30.705356 coreos-metadata[938]: Feb 13 04:11:30.601 INFO Fetch successful Feb 13 04:11:30.705356 coreos-metadata[938]: Feb 13 04:11:30.661 INFO wrote hostname ci-3510.3.2-a-5480a8887f to /sysroot/etc/hostname Feb 13 04:11:30.912528 kernel: audit: type=1130 audit(1707797490.712:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.912542 kernel: audit: type=1131 audit(1707797490.712:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.912551 kernel: audit: type=1130 audit(1707797490.831:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.575586 systemd[1]: Finished initrd-setup-root.service. Feb 13 04:11:30.926425 coreos-metadata[939]: Feb 13 04:11:30.564 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 04:11:30.926425 coreos-metadata[939]: Feb 13 04:11:30.586 INFO Fetch successful Feb 13 04:11:30.945416 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Feb 13 04:11:30.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:30.653425 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 04:11:31.029483 kernel: audit: type=1130 audit(1707797490.960:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:31.029515 initrd-setup-root[966]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 04:11:30.653465 systemd[1]: Finished flatcar-static-network.service. Feb 13 04:11:31.049487 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 04:11:30.713597 systemd[1]: Finished flatcar-metadata-hostname.service. 
Feb 13 04:11:31.068542 ignition[1017]: INFO : Ignition 2.14.0 Feb 13 04:11:31.068542 ignition[1017]: INFO : Stage: mount Feb 13 04:11:31.068542 ignition[1017]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 04:11:31.068542 ignition[1017]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 04:11:31.068542 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 04:11:31.068542 ignition[1017]: INFO : mount: mount passed Feb 13 04:11:31.068542 ignition[1017]: INFO : POST message to Packet Timeline Feb 13 04:11:31.068542 ignition[1017]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 04:11:31.068542 ignition[1017]: INFO : GET result: OK Feb 13 04:11:30.832539 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 13 04:11:30.898842 systemd[1]: Starting ignition-mount.service... Feb 13 04:11:30.919694 systemd[1]: Starting sysroot-boot.service... Feb 13 04:11:30.933750 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 13 04:11:30.933799 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 13 04:11:30.938944 systemd[1]: Finished sysroot-boot.service. Feb 13 04:11:31.348948 ignition[1017]: INFO : Ignition finished successfully Feb 13 04:11:31.351526 systemd[1]: Finished ignition-mount.service. Feb 13 04:11:31.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:31.367655 systemd[1]: Starting ignition-files.service... Feb 13 04:11:31.439506 kernel: audit: type=1130 audit(1707797491.364:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:31.433076 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 04:11:31.497453 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1034) Feb 13 04:11:31.497468 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 04:11:31.497476 kernel: BTRFS info (device sda6): using free space tree Feb 13 04:11:31.521147 kernel: BTRFS info (device sda6): has skinny extents Feb 13 04:11:31.570307 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 04:11:31.572156 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
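Annotation: each Ignition stage above logs the SHA512 of the config it parsed (the same 0131bd50... digest for base.ign every time). A one-function sketch that reproduces such a digest with hashlib:

    # Hash a config file the way Ignition reports it in
    # "parsing config with SHA512: ...".
    import hashlib
    import sys

    def config_digest(path: str) -> str:
        digest = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "/usr/lib/ignition/base.d/base.ign"
        print(f"parsing config with SHA512: {config_digest(path)}")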
Feb 13 04:11:31.589397 ignition[1053]: INFO : Ignition 2.14.0 Feb 13 04:11:31.589397 ignition[1053]: INFO : Stage: files Feb 13 04:11:31.589397 ignition[1053]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 04:11:31.589397 ignition[1053]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 04:11:31.589397 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 04:11:31.589397 ignition[1053]: DEBUG : files: compiled without relabeling support, skipping Feb 13 04:11:31.589397 ignition[1053]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 04:11:31.589397 ignition[1053]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 04:11:31.591567 unknown[1053]: wrote ssh authorized keys file for user: core Feb 13 04:11:31.692502 ignition[1053]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 04:11:31.692502 ignition[1053]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 04:11:31.692502 ignition[1053]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 04:11:31.692502 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 13 04:11:31.692502 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 13 04:11:32.104102 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 04:11:32.227821 ignition[1053]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 13 04:11:32.252523 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 13 04:11:32.252523 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 13 04:11:32.252523 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 13 04:11:32.613224 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 04:11:32.675165 ignition[1053]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 13 04:11:32.698455 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 13 04:11:32.698455 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 13 04:11:32.698455 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 13 04:11:32.762825 ignition[1053]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 13 04:11:32.949223 ignition[1053]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 13 04:11:32.974505 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 13 04:11:32.974505 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 13 04:11:32.974505 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 13 04:11:33.023398 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 04:11:33.619720 ignition[1053]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 13 04:11:33.619720 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 13 04:11:33.677355 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1073) Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2019501091" Feb 13 04:11:33.677370 ignition[1053]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2019501091": device or resource busy Feb 13 04:11:33.677370 ignition[1053]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2019501091", trying btrfs: device or resource busy Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2019501091" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2019501091" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem2019501091" Feb 13 04:11:33.677370 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem2019501091" Feb 13 04:11:33.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:33.982446 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(e): [started] processing unit "packet-phone-home.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(e): [finished] processing unit "packet-phone-home.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(14): [started] setting preset to enabled for "packet-phone-home.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(14): [finished] setting preset to enabled for "packet-phone-home.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(15): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(15): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 04:11:33.982446 ignition[1053]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 04:11:34.367565 kernel: audit: type=1130 audit(1707797493.922:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 04:11:34.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:33.678381 systemd[1]: mnt-oem2019501091.mount: Deactivated successfully. Feb 13 04:11:34.382822 ignition[1053]: INFO : files: op(17): [started] setting preset to enabled for "prepare-critools.service" Feb 13 04:11:34.382822 ignition[1053]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-critools.service" Feb 13 04:11:34.382822 ignition[1053]: INFO : files: createResultFile: createFiles: op(18): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 04:11:34.382822 ignition[1053]: INFO : files: createResultFile: createFiles: op(18): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 04:11:34.382822 ignition[1053]: INFO : files: files passed Feb 13 04:11:34.382822 ignition[1053]: INFO : POST message to Packet Timeline Feb 13 04:11:34.382822 ignition[1053]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 04:11:34.382822 ignition[1053]: INFO : GET result: OK Feb 13 04:11:34.382822 ignition[1053]: INFO : Ignition finished successfully Feb 13 04:11:33.914459 systemd[1]: Finished ignition-files.service. Feb 13 04:11:34.569523 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 04:11:34.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:33.930103 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 13 04:11:33.991525 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 13 04:11:34.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:33.991924 systemd[1]: Starting ignition-quench.service... 
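Annotation: in the files stage above, writing packet-phone-home.service required mounting the OEM partition; the ext4 attempt fails with "device or resource busy" and the btrfs retry succeeds. A sketch of that try-each-filesystem fallback, shelling out to mount(8); the device path matches the log, while the temporary mountpoint is a placeholder:

    # Mount a device by trying a list of filesystem types in order, as the
    # ext4-then-btrfs attempts above do. Needs root.
    import subprocess
    import tempfile

    def mount_with_fallback(device: str, fstypes=("ext4", "btrfs")) -> str:
        mountpoint = tempfile.mkdtemp(prefix="oem")
        last_error = ""
        for fstype in fstypes:
            result = subprocess.run(
                ["mount", "-t", fstype, device, mountpoint],
                capture_output=True, text=True)
            if result.returncode == 0:
                return mountpoint
            last_error = result.stderr.strip()
            print(f"failed to mount {fstype} device {device}: {last_error}")
        raise RuntimeError(f"could not mount {device}: {last_error}")

    if __name__ == "__main__":
        print(mount_with_fallback("/dev/disk/by-label/OEM"))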
Feb 13 04:11:34.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.020820 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 13 04:11:34.037808 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 04:11:34.037875 systemd[1]: Finished ignition-quench.service. Feb 13 04:11:34.056735 systemd[1]: Reached target ignition-complete.target. Feb 13 04:11:34.084445 systemd[1]: Starting initrd-parse-etc.service... Feb 13 04:11:34.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.129173 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 04:11:34.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.129280 systemd[1]: Finished initrd-parse-etc.service. Feb 13 04:11:34.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.147746 systemd[1]: Reached target initrd-fs.target. Feb 13 04:11:34.776394 ignition[1103]: INFO : Ignition 2.14.0 Feb 13 04:11:34.776394 ignition[1103]: INFO : Stage: umount Feb 13 04:11:34.776394 ignition[1103]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 04:11:34.776394 ignition[1103]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 04:11:34.776394 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 04:11:34.776394 ignition[1103]: INFO : umount: umount passed Feb 13 04:11:34.776394 ignition[1103]: INFO : POST message to Packet Timeline Feb 13 04:11:34.776394 ignition[1103]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 04:11:34.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.168525 systemd[1]: Reached target initrd.target. Feb 13 04:11:35.014973 kernel: kauditd_printk_skb: 17 callbacks suppressed Feb 13 04:11:35.014988 kernel: audit: type=1131 audit(1707797494.920:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 04:11:34.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.015023 iscsid[899]: iscsid shutting down. Feb 13 04:11:35.136811 kernel: audit: type=1130 audit(1707797495.022:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.136822 kernel: audit: type=1131 audit(1707797495.022:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.136865 ignition[1103]: INFO : GET result: OK Feb 13 04:11:35.136865 ignition[1103]: INFO : Ignition finished successfully Feb 13 04:11:35.264617 kernel: audit: type=1131 audit(1707797495.144:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.264630 kernel: audit: type=1131 audit(1707797495.207:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.186666 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 13 04:11:34.188734 systemd[1]: Starting dracut-pre-pivot.service... Feb 13 04:11:34.221971 systemd[1]: Finished dracut-pre-pivot.service. Feb 13 04:11:35.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.236800 systemd[1]: Starting initrd-cleanup.service... Feb 13 04:11:35.428504 kernel: audit: type=1131 audit(1707797495.303:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.428521 kernel: audit: type=1131 audit(1707797495.369:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.268850 systemd[1]: Stopped target nss-lookup.target. 
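In the umount stage above, Ignition logs the SHA-512 digest of the base config it parsed ("parsing config with SHA512: 0131bd…"). A small sketch reproducing that kind of digest with the Python standard library; point it at whichever config file you want to check:

```python
# Sketch: compute the SHA-512 digest of a config file, matching the form of
# Ignition's "parsing config with SHA512: ..." debug line above.
import hashlib
import sys

def sha512_of(path: str) -> str:
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/usr/lib/ignition/base.d/base.ign"
    print(sha512_of(target))
```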
Feb 13 04:11:35.495849 kernel: audit: type=1131 audit(1707797495.436:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.286904 systemd[1]: Stopped target remote-cryptsetup.target. Feb 13 04:11:35.563325 kernel: audit: type=1131 audit(1707797495.503:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.309936 systemd[1]: Stopped target timers.target. Feb 13 04:11:34.331983 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 04:11:34.332374 systemd[1]: Stopped dracut-pre-pivot.service. Feb 13 04:11:34.354184 systemd[1]: Stopped target initrd.target. Feb 13 04:11:34.374893 systemd[1]: Stopped target basic.target. Feb 13 04:11:35.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.389834 systemd[1]: Stopped target ignition-complete.target. Feb 13 04:11:35.692539 kernel: audit: type=1131 audit(1707797495.610:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.684000 audit: BPF prog-id=6 op=UNLOAD Feb 13 04:11:34.410866 systemd[1]: Stopped target ignition-diskful.target. Feb 13 04:11:34.431885 systemd[1]: Stopped target initrd-root-device.target. Feb 13 04:11:34.455884 systemd[1]: Stopped target remote-fs.target. Feb 13 04:11:35.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.479862 systemd[1]: Stopped target remote-fs-pre.target. Feb 13 04:11:35.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.494909 systemd[1]: Stopped target sysinit.target. Feb 13 04:11:35.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.512891 systemd[1]: Stopped target local-fs.target. Feb 13 04:11:34.530867 systemd[1]: Stopped target local-fs-pre.target. Feb 13 04:11:35.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 04:11:34.546854 systemd[1]: Stopped target swap.target. Feb 13 04:11:34.562725 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 04:11:34.563091 systemd[1]: Stopped dracut-pre-mount.service. Feb 13 04:11:35.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.578102 systemd[1]: Stopped target cryptsetup.target. Feb 13 04:11:35.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.600739 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 04:11:35.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.601092 systemd[1]: Stopped dracut-initqueue.service. Feb 13 04:11:35.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.619018 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 04:11:35.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:35.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:34.619396 systemd[1]: Stopped ignition-fetch-offline.service. Feb 13 04:11:34.644094 systemd[1]: Stopped target paths.target. Feb 13 04:11:34.657759 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 04:11:34.662482 systemd[1]: Stopped systemd-ask-password-console.path. Feb 13 04:11:34.673855 systemd[1]: Stopped target slices.target. Feb 13 04:11:34.688802 systemd[1]: Stopped target sockets.target. Feb 13 04:11:34.703891 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 04:11:34.704299 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 13 04:11:34.722981 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 04:11:34.723346 systemd[1]: Stopped ignition-files.service. Feb 13 04:11:34.739992 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 04:11:34.740370 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 13 04:11:34.758013 systemd[1]: Stopping ignition-mount.service... Feb 13 04:11:34.769491 systemd[1]: Stopping iscsid.service... Feb 13 04:11:34.783447 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 04:11:34.783582 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 04:11:34.791291 systemd[1]: Stopping sysroot-boot.service... Feb 13 04:11:34.809377 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 04:11:36.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 04:11:34.809582 systemd[1]: Stopped systemd-udev-trigger.service. Feb 13 04:11:34.821748 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 04:11:34.821930 systemd[1]: Stopped dracut-pre-trigger.service. Feb 13 04:11:34.862744 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 04:11:34.864825 systemd[1]: iscsid.service: Deactivated successfully. Feb 13 04:11:34.865067 systemd[1]: Stopped iscsid.service. Feb 13 04:11:34.874820 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 04:11:34.874988 systemd[1]: Closed iscsid.socket. Feb 13 04:11:34.888664 systemd[1]: Stopping iscsiuio.service... Feb 13 04:11:34.903954 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 04:11:34.904180 systemd[1]: Stopped iscsiuio.service. Feb 13 04:11:34.922342 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 04:11:34.922561 systemd[1]: Finished initrd-cleanup.service. Feb 13 04:11:35.023669 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 04:11:35.023708 systemd[1]: Stopped ignition-mount.service. Feb 13 04:11:35.145592 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 04:11:35.145630 systemd[1]: Stopped sysroot-boot.service. Feb 13 04:11:35.208865 systemd[1]: Stopped target network.target. Feb 13 04:11:35.273527 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 04:11:35.273546 systemd[1]: Closed iscsiuio.socket. Feb 13 04:11:35.290386 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 04:11:35.290408 systemd[1]: Stopped ignition-disks.service. Feb 13 04:11:35.304454 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 04:11:35.304484 systemd[1]: Stopped ignition-kargs.service. Feb 13 04:11:35.390450 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 04:11:35.390528 systemd[1]: Stopped ignition-setup.service. Feb 13 04:11:35.437497 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 04:11:35.437519 systemd[1]: Stopped initrd-setup-root.service. Feb 13 04:11:35.524864 systemd[1]: Stopping systemd-networkd.service... Feb 13 04:11:35.568486 systemd-networkd[869]: enp1s0f0np0: DHCPv6 lease lost Feb 13 04:11:35.570546 systemd[1]: Stopping systemd-resolved.service... Feb 13 04:11:35.576333 systemd-networkd[869]: enp1s0f1np1: DHCPv6 lease lost Feb 13 04:11:35.591721 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 04:11:36.146000 audit: BPF prog-id=9 op=UNLOAD Feb 13 04:11:35.591766 systemd[1]: Stopped systemd-resolved.service. Feb 13 04:11:35.612416 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 04:11:35.612470 systemd[1]: Stopped systemd-networkd.service. Feb 13 04:11:35.684591 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 04:11:35.684606 systemd[1]: Closed systemd-networkd.socket. Feb 13 04:11:35.702972 systemd[1]: Stopping network-cleanup.service... Feb 13 04:11:35.711615 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 04:11:35.711646 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 13 04:11:35.733471 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 04:11:35.733582 systemd[1]: Stopped systemd-sysctl.service. Feb 13 04:11:35.748628 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 04:11:35.748677 systemd[1]: Stopped systemd-modules-load.service. Feb 13 04:11:35.764834 systemd[1]: Stopping systemd-udevd.service... 
Feb 13 04:11:35.782357 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 04:11:35.783760 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 04:11:35.784070 systemd[1]: Stopped systemd-udevd.service. Feb 13 04:11:36.148278 systemd-journald[266]: Received SIGTERM from PID 1 (n/a). Feb 13 04:11:35.796215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 04:11:35.796370 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 04:11:35.808674 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 04:11:35.808770 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 04:11:35.823592 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 04:11:35.823717 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 04:11:35.839637 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 04:11:35.839762 systemd[1]: Stopped dracut-cmdline.service. Feb 13 04:11:35.854579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 04:11:35.854701 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 04:11:35.871308 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 04:11:35.885287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 04:11:35.885314 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 04:11:35.885534 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 04:11:35.885577 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 04:11:36.047350 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 04:11:36.047593 systemd[1]: Stopped network-cleanup.service. Feb 13 04:11:36.060899 systemd[1]: Reached target initrd-switch-root.target. Feb 13 04:11:36.080150 systemd[1]: Starting initrd-switch-root.service... Feb 13 04:11:36.102104 systemd[1]: Switching root. Feb 13 04:11:36.148725 systemd-journald[266]: Journal stopped Feb 13 04:11:39.758894 kernel: SELinux: Class mctp_socket not defined in policy. Feb 13 04:11:39.758907 kernel: SELinux: Class anon_inode not defined in policy. Feb 13 04:11:39.758915 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 13 04:11:39.758922 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 04:11:39.758926 kernel: SELinux: policy capability open_perms=1 Feb 13 04:11:39.758931 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 04:11:39.758938 kernel: SELinux: policy capability always_check_network=0 Feb 13 04:11:39.758943 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 04:11:39.758948 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 04:11:39.758954 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 04:11:39.758959 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 04:11:39.758965 systemd[1]: Successfully loaded SELinux policy in 298.924ms. Feb 13 04:11:39.758972 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.721ms. Feb 13 04:11:39.758978 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 04:11:39.758986 systemd[1]: Detected architecture x86-64. Feb 13 04:11:39.758992 systemd[1]: Detected first boot. 
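After switch-root, systemd 252 prints its build-feature string (+PAM +AUDIT +SELINUX -APPARMOR …). A small sketch, using the string exactly as reported above, that splits it into enabled and disabled feature sets:

```python
# Sketch: split the systemd feature string reported above into
# enabled (+FOO) and disabled (-FOO) sets.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP +SYSVINIT")

enabled = {f[1:] for f in FEATURES.split() if f.startswith("+")}
disabled = {f[1:] for f in FEATURES.split() if f.startswith("-")}
print("enabled: ", sorted(enabled))
print("disabled:", sorted(disabled))
```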
Feb 13 04:11:39.758997 systemd[1]: Hostname set to . Feb 13 04:11:39.759004 systemd[1]: Initializing machine ID from random generator. Feb 13 04:11:39.759009 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 13 04:11:39.759015 systemd[1]: Populated /etc with preset unit settings. Feb 13 04:11:39.759021 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 04:11:39.759028 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 04:11:39.759035 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 04:11:39.759041 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 04:11:39.759047 systemd[1]: Stopped initrd-switch-root.service. Feb 13 04:11:39.759053 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 04:11:39.759059 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 13 04:11:39.759066 systemd[1]: Created slice system-addon\x2drun.slice. Feb 13 04:11:39.759073 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 13 04:11:39.759078 systemd[1]: Created slice system-getty.slice. Feb 13 04:11:39.759084 systemd[1]: Created slice system-modprobe.slice. Feb 13 04:11:39.759090 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 13 04:11:39.759096 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 13 04:11:39.759102 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 13 04:11:39.759108 systemd[1]: Created slice user.slice. Feb 13 04:11:39.759114 systemd[1]: Started systemd-ask-password-console.path. Feb 13 04:11:39.759121 systemd[1]: Started systemd-ask-password-wall.path. Feb 13 04:11:39.759127 systemd[1]: Set up automount boot.automount. Feb 13 04:11:39.759132 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 13 04:11:39.759139 systemd[1]: Stopped target initrd-switch-root.target. Feb 13 04:11:39.759146 systemd[1]: Stopped target initrd-fs.target. Feb 13 04:11:39.759152 systemd[1]: Stopped target initrd-root-fs.target. Feb 13 04:11:39.759158 systemd[1]: Reached target integritysetup.target. Feb 13 04:11:39.759165 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 04:11:39.759172 systemd[1]: Reached target remote-fs.target. Feb 13 04:11:39.759178 systemd[1]: Reached target slices.target. Feb 13 04:11:39.759184 systemd[1]: Reached target swap.target. Feb 13 04:11:39.759190 systemd[1]: Reached target torcx.target. Feb 13 04:11:39.759196 systemd[1]: Reached target veritysetup.target. Feb 13 04:11:39.759203 systemd[1]: Listening on systemd-coredump.socket. Feb 13 04:11:39.759209 systemd[1]: Listening on systemd-initctl.socket. Feb 13 04:11:39.759215 systemd[1]: Listening on systemd-networkd.socket. Feb 13 04:11:39.759223 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 04:11:39.759229 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 04:11:39.759235 systemd[1]: Listening on systemd-userdbd.socket. Feb 13 04:11:39.759242 systemd[1]: Mounting dev-hugepages.mount... Feb 13 04:11:39.759248 systemd[1]: Mounting dev-mqueue.mount... Feb 13 04:11:39.759257 systemd[1]: Mounting media.mount... 
Feb 13 04:11:39.759264 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 04:11:39.759271 systemd[1]: Mounting sys-kernel-debug.mount... Feb 13 04:11:39.759277 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 13 04:11:39.759283 systemd[1]: Mounting tmp.mount... Feb 13 04:11:39.759289 systemd[1]: Starting flatcar-tmpfiles.service... Feb 13 04:11:39.759296 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 13 04:11:39.759302 systemd[1]: Starting kmod-static-nodes.service... Feb 13 04:11:39.759308 systemd[1]: Starting modprobe@configfs.service... Feb 13 04:11:39.759315 systemd[1]: Starting modprobe@dm_mod.service... Feb 13 04:11:39.759322 systemd[1]: Starting modprobe@drm.service... Feb 13 04:11:39.759329 systemd[1]: Starting modprobe@efi_pstore.service... Feb 13 04:11:39.759335 systemd[1]: Starting modprobe@fuse.service... Feb 13 04:11:39.759341 kernel: fuse: init (API version 7.34) Feb 13 04:11:39.759347 systemd[1]: Starting modprobe@loop.service... Feb 13 04:11:39.759353 kernel: loop: module loaded Feb 13 04:11:39.759359 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 04:11:39.759366 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 04:11:39.759373 systemd[1]: Stopped systemd-fsck-root.service. Feb 13 04:11:39.759379 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 04:11:39.759386 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 04:11:39.759392 systemd[1]: Stopped systemd-journald.service. Feb 13 04:11:39.759398 systemd[1]: Starting systemd-journald.service... Feb 13 04:11:39.759404 systemd[1]: Starting systemd-modules-load.service... Feb 13 04:11:39.759412 systemd-journald[1253]: Journal started Feb 13 04:11:39.759437 systemd-journald[1253]: Runtime Journal (/run/log/journal/8bbf9516b5f64deface324175e3fa9c6) is 8.0M, max 640.1M, 632.1M free. 
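journald reports its runtime journal as 8.0M used with a 640.1M cap and 632.1M free. A short sketch, using only those reported figures, that converts the suffixed sizes to bytes and sanity-checks that used plus free matches the cap:

```python
# Sketch: parse journald size figures like "8.0M, max 640.1M, 632.1M free"
# (from the Runtime Journal line above) and sanity-check them.
def to_bytes(s: str) -> float:
    units = {"K": 1024, "M": 1024**2, "G": 1024**3}
    return float(s[:-1]) * units[s[-1]]

used, cap, free = map(to_bytes, ("8.0M", "640.1M", "632.1M"))
# Values are rounded to one decimal in the log, so allow ~1M of slack.
assert abs((used + free) - cap) < to_bytes("1.0M")
print(f"runtime journal: {used / cap:.1%} of cap in use")
```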
Feb 13 04:11:36.500000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 04:11:36.772000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 04:11:36.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 04:11:36.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 04:11:36.774000 audit: BPF prog-id=10 op=LOAD Feb 13 04:11:36.774000 audit: BPF prog-id=10 op=UNLOAD Feb 13 04:11:36.774000 audit: BPF prog-id=11 op=LOAD Feb 13 04:11:36.774000 audit: BPF prog-id=11 op=UNLOAD Feb 13 04:11:36.845000 audit[1143]: AVC avc: denied { associate } for pid=1143 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 13 04:11:36.845000 audit[1143]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58dc a1=c00002ce58 a2=c00002bb00 a3=32 items=0 ppid=1126 pid=1143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 04:11:36.845000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 04:11:36.871000 audit[1143]: AVC avc: denied { associate } for pid=1143 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 13 04:11:36.871000 audit[1143]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59b5 a2=1ed a3=0 items=2 ppid=1126 pid=1143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 04:11:36.871000 audit: CWD cwd="/" Feb 13 04:11:36.871000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:36.871000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:36.871000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 04:11:38.415000 audit: BPF prog-id=12 op=LOAD Feb 13 04:11:38.415000 audit: BPF prog-id=3 op=UNLOAD Feb 13 04:11:38.415000 audit: BPF prog-id=13 op=LOAD Feb 13 04:11:38.415000 audit: BPF prog-id=14 
op=LOAD Feb 13 04:11:38.415000 audit: BPF prog-id=4 op=UNLOAD Feb 13 04:11:38.415000 audit: BPF prog-id=5 op=UNLOAD Feb 13 04:11:38.416000 audit: BPF prog-id=15 op=LOAD Feb 13 04:11:38.416000 audit: BPF prog-id=12 op=UNLOAD Feb 13 04:11:38.416000 audit: BPF prog-id=16 op=LOAD Feb 13 04:11:38.416000 audit: BPF prog-id=17 op=LOAD Feb 13 04:11:38.416000 audit: BPF prog-id=13 op=UNLOAD Feb 13 04:11:38.416000 audit: BPF prog-id=14 op=UNLOAD Feb 13 04:11:38.416000 audit: BPF prog-id=18 op=LOAD Feb 13 04:11:38.416000 audit: BPF prog-id=15 op=UNLOAD Feb 13 04:11:38.417000 audit: BPF prog-id=19 op=LOAD Feb 13 04:11:38.417000 audit: BPF prog-id=20 op=LOAD Feb 13 04:11:38.417000 audit: BPF prog-id=16 op=UNLOAD Feb 13 04:11:38.417000 audit: BPF prog-id=17 op=UNLOAD Feb 13 04:11:38.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:38.467000 audit: BPF prog-id=18 op=UNLOAD Feb 13 04:11:38.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:38.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:39.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:39.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:39.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:39.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:39.731000 audit: BPF prog-id=21 op=LOAD Feb 13 04:11:39.731000 audit: BPF prog-id=22 op=LOAD Feb 13 04:11:39.731000 audit: BPF prog-id=23 op=LOAD Feb 13 04:11:39.731000 audit: BPF prog-id=19 op=UNLOAD Feb 13 04:11:39.731000 audit: BPF prog-id=20 op=UNLOAD Feb 13 04:11:39.755000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 04:11:39.755000 audit[1253]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffcb31a7650 a2=4000 a3=7ffcb31a76ec items=0 ppid=1 pid=1253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 04:11:39.755000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 13 04:11:38.414855 systemd[1]: Queued start job for default target multi-user.target. 
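The audit stream above is dominated by "BPF prog-id=N op=LOAD/UNLOAD" records as systemd replaces its BPF programs across the journald restart. A small sketch that tallies such records to show which program IDs remain loaded at any point in a captured log:

```python
# Sketch: tally "audit: BPF prog-id=N op=LOAD/UNLOAD" records, as seen
# throughout this log, to find program IDs still loaded.
import re

def outstanding(log_text: str) -> set[int]:
    loaded: set[int] = set()
    for prog_id, op in re.findall(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", log_text):
        (loaded.add if op == "LOAD" else loaded.discard)(int(prog_id))
    return loaded

sample = "audit: BPF prog-id=21 op=LOAD ... audit: BPF prog-id=19 op=UNLOAD"
print(outstanding(sample))   # -> {21}
```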
Feb 13 04:11:36.843350 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 04:11:38.418604 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 04:11:36.843870 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 04:11:36.843889 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 04:11:36.843916 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 13 04:11:36.843925 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 13 04:11:36.843953 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 13 04:11:36.843965 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 13 04:11:36.844127 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 13 04:11:36.844164 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 04:11:36.844176 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 04:11:36.844750 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 13 04:11:36.844784 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 13 04:11:36.844802 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 13 04:11:36.844815 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 13 04:11:36.844830 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 13 04:11:36.844842 
/usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 13 04:11:38.062605 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 04:11:38.062744 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 04:11:38.062798 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 04:11:38.062892 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 04:11:38.062921 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 13 04:11:38.062960 /usr/lib/systemd/system-generators/torcx-generator[1143]: time="2024-02-13T04:11:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 13 04:11:39.790456 systemd[1]: Starting systemd-network-generator.service... Feb 13 04:11:39.813439 systemd[1]: Starting systemd-remount-fs.service... Feb 13 04:11:39.834317 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 04:11:39.866816 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 04:11:39.866836 systemd[1]: Stopped verity-setup.service. Feb 13 04:11:39.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:39.901312 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 04:11:39.915311 systemd[1]: Started systemd-journald.service. Feb 13 04:11:39.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:39.923919 systemd[1]: Mounted dev-hugepages.mount. 
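The torcx-generator messages above end with the sealed system state written to /run/metadata/torcx as KEY="value" pairs (TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH, TORCX_BINDIR, …). A minimal sketch that parses that kind of environment-style file into a dictionary; the sample string below is lifted from the sealed content shown in the log:

```python
# Sketch: parse torcx metadata of KEY="value" lines (the sealed state the
# generator reports writing to /run/metadata/torcx above).
def parse_env(text: str) -> dict[str, str]:
    out: dict[str, str] = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, val = line.partition("=")
            out[key.strip()] = val.strip().strip('"')
    return out

sample = ('TORCX_LOWER_PROFILES="vendor"\n'
          'TORCX_PROFILE_PATH="/run/torcx/profile.json"\n'
          'TORCX_BINDIR="/run/torcx/bin"')
print(parse_env(sample)["TORCX_BINDIR"])   # -> /run/torcx/bin
```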
Feb 13 04:11:39.937042 kernel: kauditd_printk_skb: 66 callbacks suppressed Feb 13 04:11:39.937064 kernel: audit: type=1130 audit(1707797499.922:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:39.984535 systemd[1]: Mounted dev-mqueue.mount. Feb 13 04:11:39.991527 systemd[1]: Mounted media.mount. Feb 13 04:11:39.998532 systemd[1]: Mounted sys-kernel-debug.mount. Feb 13 04:11:40.007529 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 13 04:11:40.016504 systemd[1]: Mounted tmp.mount. Feb 13 04:11:40.023579 systemd[1]: Finished flatcar-tmpfiles.service. Feb 13 04:11:40.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.032588 systemd[1]: Finished kmod-static-nodes.service. Feb 13 04:11:40.074397 kernel: audit: type=1130 audit(1707797500.031:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.082673 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 04:11:40.082749 systemd[1]: Finished modprobe@configfs.service. Feb 13 04:11:40.126459 kernel: audit: type=1130 audit(1707797500.081:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.135624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 04:11:40.135684 systemd[1]: Finished modprobe@dm_mod.service. Feb 13 04:11:40.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.181313 kernel: audit: type=1130 audit(1707797500.134:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.181345 kernel: audit: type=1131 audit(1707797500.134:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.236610 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 04:11:40.236670 systemd[1]: Finished modprobe@drm.service. 
Feb 13 04:11:40.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.284309 kernel: audit: type=1130 audit(1707797500.235:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.284328 kernel: audit: type=1131 audit(1707797500.235:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.341600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 04:11:40.341660 systemd[1]: Finished modprobe@efi_pstore.service. Feb 13 04:11:40.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.391303 kernel: audit: type=1130 audit(1707797500.340:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.391321 kernel: audit: type=1131 audit(1707797500.340:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.450587 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 04:11:40.450660 systemd[1]: Finished modprobe@fuse.service. Feb 13 04:11:40.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.504458 kernel: audit: type=1130 audit(1707797500.449:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.512596 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 04:11:40.512654 systemd[1]: Finished modprobe@loop.service. 
Feb 13 04:11:40.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.521598 systemd[1]: Finished systemd-modules-load.service. Feb 13 04:11:40.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.530572 systemd[1]: Finished systemd-network-generator.service. Feb 13 04:11:40.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.539580 systemd[1]: Finished systemd-remount-fs.service. Feb 13 04:11:40.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.548572 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 04:11:40.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.557791 systemd[1]: Reached target network-pre.target. Feb 13 04:11:40.567337 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 13 04:11:40.576405 systemd[1]: Mounting sys-kernel-config.mount... Feb 13 04:11:40.583440 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 04:11:40.585140 systemd[1]: Starting systemd-hwdb-update.service... Feb 13 04:11:40.592822 systemd[1]: Starting systemd-journal-flush.service... Feb 13 04:11:40.596288 systemd-journald[1253]: Time spent on flushing to /var/log/journal/8bbf9516b5f64deface324175e3fa9c6 is 14.707ms for 1604 entries. Feb 13 04:11:40.596288 systemd-journald[1253]: System Journal (/var/log/journal/8bbf9516b5f64deface324175e3fa9c6) is 8.0M, max 195.6M, 187.6M free. Feb 13 04:11:40.646699 systemd-journald[1253]: Received client request to flush runtime journal. Feb 13 04:11:40.609406 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 04:11:40.609846 systemd[1]: Starting systemd-random-seed.service... Feb 13 04:11:40.627401 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 13 04:11:40.627867 systemd[1]: Starting systemd-sysctl.service... Feb 13 04:11:40.635035 systemd[1]: Starting systemd-sysusers.service... Feb 13 04:11:40.641915 systemd[1]: Starting systemd-udev-settle.service... Feb 13 04:11:40.650429 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 13 04:11:40.658409 systemd[1]: Mounted sys-kernel-config.mount. Feb 13 04:11:40.666505 systemd[1]: Finished systemd-journal-flush.service. 
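journald reports spending 14.707 ms flushing 1604 entries to the persistent journal; a one-line check of the per-entry average implied by those two figures:

```python
# Sketch: per-entry average for the flush figures journald reports above.
print(f"{14.707e-3 / 1604 * 1e6:.1f} us per entry")   # ~9.2 us
```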
Feb 13 04:11:40.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.674529 systemd[1]: Finished systemd-random-seed.service. Feb 13 04:11:40.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.682497 systemd[1]: Finished systemd-sysctl.service. Feb 13 04:11:40.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.690498 systemd[1]: Finished systemd-sysusers.service. Feb 13 04:11:40.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.699487 systemd[1]: Reached target first-boot-complete.target. Feb 13 04:11:40.707615 udevadm[1269]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 04:11:40.910143 systemd[1]: Finished systemd-hwdb-update.service. Feb 13 04:11:40.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.918000 audit: BPF prog-id=24 op=LOAD Feb 13 04:11:40.919000 audit: BPF prog-id=25 op=LOAD Feb 13 04:11:40.919000 audit: BPF prog-id=7 op=UNLOAD Feb 13 04:11:40.919000 audit: BPF prog-id=8 op=UNLOAD Feb 13 04:11:40.920584 systemd[1]: Starting systemd-udevd.service... Feb 13 04:11:40.932033 systemd-udevd[1270]: Using default interface naming scheme 'v252'. Feb 13 04:11:40.949575 systemd[1]: Started systemd-udevd.service. Feb 13 04:11:40.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:40.959569 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 13 04:11:40.959000 audit: BPF prog-id=26 op=LOAD Feb 13 04:11:40.960739 systemd[1]: Starting systemd-networkd.service... Feb 13 04:11:40.986000 audit: BPF prog-id=27 op=LOAD Feb 13 04:11:41.007164 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 04:11:41.007239 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 04:11:41.007263 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1290) Feb 13 04:11:41.005000 audit: BPF prog-id=28 op=LOAD Feb 13 04:11:41.034263 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 04:11:41.056000 audit: BPF prog-id=29 op=LOAD Feb 13 04:11:41.058354 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 04:11:41.080086 systemd[1]: Starting systemd-userdbd.service... 
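With systemd-udevd running, the kernel notes that the device behind /dev/disk/by-label/OEM resolved to /dev/sda6, and systemd finds the corresponding dev-disk-by\x2dlabel-OEM.device. A small sketch, standard library only, that lists by-label symlinks and the block devices they resolve to on the running machine:

```python
# Sketch: resolve /dev/disk/by-label symlinks to their backing devices,
# mirroring the by-label/OEM -> /dev/sda6 mapping reported above.
import os

def by_label(base: str = "/dev/disk/by-label") -> dict[str, str]:
    if not os.path.isdir(base):
        return {}
    return {name: os.path.realpath(os.path.join(base, name))
            for name in sorted(os.listdir(base))}

if __name__ == "__main__":
    for label, dev in by_label().items():
        print(f"{label} -> {dev}")
```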
Feb 13 04:11:41.080259 kernel: ACPI: button: Power Button [PWRF] Feb 13 04:11:41.036000 audit[1338]: AVC avc: denied { confidentiality } for pid=1338 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 04:11:41.036000 audit[1338]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560241a88130 a1=4d8bc a2=7fdbeb61bbc5 a3=5 items=42 ppid=1270 pid=1338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 04:11:41.036000 audit: CWD cwd="/" Feb 13 04:11:41.036000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=1 name=(null) inode=16213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=2 name=(null) inode=16213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=3 name=(null) inode=16214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=4 name=(null) inode=16213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=5 name=(null) inode=16215 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=6 name=(null) inode=16213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=7 name=(null) inode=16216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=8 name=(null) inode=16216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=9 name=(null) inode=16217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=10 name=(null) inode=16216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=11 name=(null) inode=16218 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=12 name=(null) inode=16216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH 
item=13 name=(null) inode=16219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=14 name=(null) inode=16216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=15 name=(null) inode=16220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=16 name=(null) inode=16216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=17 name=(null) inode=16221 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=18 name=(null) inode=16213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=19 name=(null) inode=16222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=20 name=(null) inode=16222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=21 name=(null) inode=16223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=22 name=(null) inode=16222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=23 name=(null) inode=16224 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=24 name=(null) inode=16222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=25 name=(null) inode=16225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=26 name=(null) inode=16222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=27 name=(null) inode=16226 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=28 name=(null) inode=16222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=29 name=(null) inode=16227 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=30 name=(null) inode=16213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=31 name=(null) inode=16228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=32 name=(null) inode=16228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=33 name=(null) inode=16229 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=34 name=(null) inode=16228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=35 name=(null) inode=16230 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=36 name=(null) inode=16228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=37 name=(null) inode=16231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=38 name=(null) inode=16228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=39 name=(null) inode=16232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=40 name=(null) inode=16228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PATH item=41 name=(null) inode=16233 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 04:11:41.036000 audit: PROCTITLE proctitle="(udev-worker)" Feb 13 04:11:41.108264 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 04:11:41.108420 kernel: IPMI message handler: version 39.2 Feb 13 04:11:41.108440 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 04:11:41.108559 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 04:11:41.150681 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 04:11:41.161101 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 04:11:41.240264 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 04:11:41.248469 systemd[1]: Started systemd-userdbd.service. 
Feb 13 04:11:41.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:41.272263 kernel: ipmi device interface Feb 13 04:11:41.299265 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 04:11:41.299308 kernel: ipmi_si: IPMI System Interface driver Feb 13 04:11:41.319341 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 04:11:41.364570 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 04:11:41.364594 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 04:11:41.386333 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 04:11:41.432333 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 04:11:41.496576 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 04:11:41.496684 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 04:11:41.496712 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 04:11:41.592871 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 13 04:11:41.593002 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 04:11:41.593084 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 13 04:11:41.616262 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 13 04:11:41.632742 systemd-networkd[1309]: bond0: netdev ready Feb 13 04:11:41.634986 systemd-networkd[1309]: lo: Link UP Feb 13 04:11:41.634989 systemd-networkd[1309]: lo: Gained carrier Feb 13 04:11:41.635460 systemd-networkd[1309]: Enumeration completed Feb 13 04:11:41.635539 systemd[1]: Started systemd-networkd.service. Feb 13 04:11:41.635734 systemd-networkd[1309]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 04:11:41.643464 systemd-networkd[1309]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:76:1d.network. Feb 13 04:11:41.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:41.698990 kernel: intel_rapl_common: Found RAPL domain package Feb 13 04:11:41.699035 kernel: intel_rapl_common: Found RAPL domain core Feb 13 04:11:41.699054 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 04:11:41.800261 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 04:11:41.821260 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 04:11:41.823568 systemd[1]: Finished systemd-udev-settle.service. Feb 13 04:11:41.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:41.832055 systemd[1]: Starting lvm2-activation-early.service... Feb 13 04:11:41.846590 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 04:11:41.874647 systemd[1]: Finished lvm2-activation-early.service. 
Feb 13 04:11:41.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:41.883400 systemd[1]: Reached target cryptsetup.target. Feb 13 04:11:41.892944 systemd[1]: Starting lvm2-activation.service... Feb 13 04:11:41.895071 lvm[1376]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 04:11:41.927686 systemd[1]: Finished lvm2-activation.service. Feb 13 04:11:41.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:41.935426 systemd[1]: Reached target local-fs-pre.target. Feb 13 04:11:41.943344 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 04:11:41.943358 systemd[1]: Reached target local-fs.target. Feb 13 04:11:41.951343 systemd[1]: Reached target machines.target. Feb 13 04:11:41.959924 systemd[1]: Starting ldconfig.service... Feb 13 04:11:41.967642 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 04:11:41.967670 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 04:11:41.968163 systemd[1]: Starting systemd-boot-update.service... Feb 13 04:11:41.976928 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 04:11:41.987181 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 04:11:41.987351 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 13 04:11:41.987393 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 04:11:41.988165 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 04:11:41.988480 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1378 (bootctl) Feb 13 04:11:41.989434 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 04:11:42.001868 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 04:11:42.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:42.099041 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 04:11:42.108316 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 04:11:42.143307 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 13 04:11:42.145125 systemd-networkd[1309]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:76:1c.network. Feb 13 04:11:42.180299 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 04:11:42.253647 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 13 04:11:42.306346 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 04:11:42.306415 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 04:11:42.349302 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 13 04:11:42.349341 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 04:11:42.370040 systemd-networkd[1309]: bond0: Link UP Feb 13 04:11:42.370252 systemd-networkd[1309]: enp1s0f1np1: Link UP Feb 13 04:11:42.370392 systemd-networkd[1309]: enp1s0f1np1: Gained carrier Feb 13 04:11:42.371404 systemd-networkd[1309]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:76:1c.network. Feb 13 04:11:42.395574 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 04:11:42.409258 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.429657 systemd-fsck[1386]: fsck.fat 4.2 (2021-01-31) Feb 13 04:11:42.429657 systemd-fsck[1386]: /dev/sda1: 789 files, 115339/258078 clusters Feb 13 04:11:42.430261 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.430270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 04:11:42.430630 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 04:11:42.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:42.446481 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 13 04:11:42.452301 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:42.472216 systemd[1]: Mounting boot.mount... Feb 13 04:11:42.472274 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.487259 systemd[1]: Mounted boot.mount. Feb 13 04:11:42.493293 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.512260 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.519140 systemd[1]: Finished systemd-boot-update.service. Feb 13 04:11:42.532262 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:42.546661 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 04:11:42.551258 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 04:11:42.567238 systemd[1]: Starting audit-rules.service... Feb 13 04:11:42.570295 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.586708 systemd[1]: Starting clean-ca-certificates.service... 
Feb 13 04:11:42.589298 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.589000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 04:11:42.589000 audit[1404]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff32006620 a2=420 a3=0 items=0 ppid=1389 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 04:11:42.589000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 04:11:42.591075 augenrules[1404]: No rules Feb 13 04:11:42.604894 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 04:11:42.606294 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.624847 systemd[1]: Starting systemd-resolved.service... Feb 13 04:11:42.625296 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.633134 ldconfig[1377]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 04:11:42.643308 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.643385 systemd[1]: Starting systemd-timesyncd.service... Feb 13 04:11:42.657828 systemd[1]: Starting systemd-update-utmp.service... Feb 13 04:11:42.662315 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.675608 systemd[1]: Finished ldconfig.service. Feb 13 04:11:42.679257 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.693472 systemd[1]: Finished audit-rules.service. Feb 13 04:11:42.697260 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.710458 systemd[1]: Finished clean-ca-certificates.service. Feb 13 04:11:42.715257 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.729452 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 13 04:11:42.732294 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.749257 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.750087 systemd[1]: Starting systemd-update-done.service... Feb 13 04:11:42.764344 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 04:11:42.764717 systemd[1]: Finished systemd-update-done.service. Feb 13 04:11:42.766293 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.766578 systemd-networkd[1309]: enp1s0f0np0: Link UP Feb 13 04:11:42.766746 systemd-networkd[1309]: bond0: Gained carrier Feb 13 04:11:42.766841 systemd-networkd[1309]: enp1s0f0np0: Gained carrier Feb 13 04:11:42.782065 systemd[1]: Finished systemd-update-utmp.service. Feb 13 04:11:42.783296 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 04:11:42.783323 kernel: bond0: (slave enp1s0f1np1): link status definitely down, disabling slave Feb 13 04:11:42.783338 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 04:11:42.829304 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 04:11:42.829333 kernel: bond0: active interface up! 
Feb 13 04:11:42.840022 systemd-networkd[1309]: enp1s0f1np1: Link DOWN Feb 13 04:11:42.840025 systemd-networkd[1309]: enp1s0f1np1: Lost carrier Feb 13 04:11:42.862693 systemd[1]: Started systemd-timesyncd.service. Feb 13 04:11:42.863890 systemd-resolved[1411]: Positive Trust Anchors: Feb 13 04:11:42.863895 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 04:11:42.863914 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 04:11:42.867972 systemd-resolved[1411]: Using system hostname 'ci-3510.3.2-a-5480a8887f'. Feb 13 04:11:42.870419 systemd[1]: Reached target time-set.target. Feb 13 04:11:42.988312 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 04:11:42.992650 systemd-networkd[1309]: enp1s0f1np1: Link UP Feb 13 04:11:42.992828 systemd-networkd[1309]: enp1s0f1np1: Gained carrier Feb 13 04:11:42.993644 systemd[1]: Started systemd-resolved.service. Feb 13 04:11:43.001376 systemd[1]: Reached target network.target. Feb 13 04:11:43.010354 systemd[1]: Reached target nss-lookup.target. Feb 13 04:11:43.019382 systemd[1]: Reached target sysinit.target. Feb 13 04:11:43.028410 systemd[1]: Started motdgen.path. Feb 13 04:11:43.035380 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 13 04:11:43.045398 systemd[1]: Started logrotate.timer. Feb 13 04:11:43.058532 systemd[1]: Started mdadm.timer. Feb 13 04:11:43.066298 kernel: bond0: (slave enp1s0f1np1): link status up, enabling it in 200 ms Feb 13 04:11:43.066327 kernel: bond0: (slave enp1s0f1np1): invalid new link 3 on slave Feb 13 04:11:43.086342 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 13 04:11:43.094324 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 04:11:43.094341 systemd[1]: Reached target paths.target. Feb 13 04:11:43.101331 systemd[1]: Reached target timers.target. Feb 13 04:11:43.108466 systemd[1]: Listening on dbus.socket. Feb 13 04:11:43.116928 systemd[1]: Starting docker.socket... Feb 13 04:11:43.125785 systemd[1]: Listening on sshd.socket. Feb 13 04:11:43.132427 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 04:11:43.132639 systemd[1]: Listening on docker.socket. Feb 13 04:11:43.139414 systemd[1]: Reached target sockets.target. Feb 13 04:11:43.147368 systemd[1]: Reached target basic.target. Feb 13 04:11:43.154392 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 04:11:43.154405 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 04:11:43.154833 systemd[1]: Starting containerd.service... Feb 13 04:11:43.161754 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 13 04:11:43.170863 systemd[1]: Starting coreos-metadata.service... Feb 13 04:11:43.177795 systemd[1]: Starting dbus.service... 
Feb 13 04:11:43.183893 systemd[1]: Starting enable-oem-cloudinit.service... Feb 13 04:11:43.188632 jq[1427]: false Feb 13 04:11:43.191228 systemd[1]: Starting extend-filesystems.service... Feb 13 04:11:43.192154 coreos-metadata[1420]: Feb 13 04:11:43.192 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 04:11:43.197810 dbus-daemon[1426]: [system] SELinux support is enabled Feb 13 04:11:43.198363 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 13 04:11:43.199079 systemd[1]: Starting motdgen.service... Feb 13 04:11:43.200137 extend-filesystems[1428]: Found sda Feb 13 04:11:43.219628 extend-filesystems[1428]: Found sda1 Feb 13 04:11:43.219628 extend-filesystems[1428]: Found sda2 Feb 13 04:11:43.219628 extend-filesystems[1428]: Found sda3 Feb 13 04:11:43.219628 extend-filesystems[1428]: Found usr Feb 13 04:11:43.219628 extend-filesystems[1428]: Found sda4 Feb 13 04:11:43.219628 extend-filesystems[1428]: Found sda6 Feb 13 04:11:43.219628 extend-filesystems[1428]: Found sda7 Feb 13 04:11:43.219628 extend-filesystems[1428]: Found sda9 Feb 13 04:11:43.219628 extend-filesystems[1428]: Checking size of /dev/sda9 Feb 13 04:11:43.219628 extend-filesystems[1428]: Resized partition /dev/sda9 Feb 13 04:11:43.326340 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 04:11:43.326370 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 04:11:43.205953 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 04:11:43.326434 coreos-metadata[1423]: Feb 13 04:11:43.201 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 04:11:43.326560 extend-filesystems[1444]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 04:11:43.240007 systemd[1]: Starting prepare-critools.service... Feb 13 04:11:43.266862 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 04:11:43.293134 systemd[1]: Starting sshd-keygen.service... Feb 13 04:11:43.318602 systemd[1]: Starting systemd-logind.service... Feb 13 04:11:43.338335 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 04:11:43.338857 systemd[1]: Starting tcsd.service... Feb 13 04:11:43.340334 systemd-logind[1456]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 04:11:43.340345 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 04:11:43.340354 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 04:11:43.340499 systemd-logind[1456]: New seat seat0. Feb 13 04:11:43.352567 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 04:11:43.352904 systemd[1]: Starting update-engine.service... Feb 13 04:11:43.359916 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 13 04:11:43.361752 jq[1459]: true Feb 13 04:11:43.368617 systemd[1]: Started dbus.service. Feb 13 04:11:43.377012 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 04:11:43.377098 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 04:11:43.377266 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 04:11:43.377342 systemd[1]: Finished motdgen.service. 
Feb 13 04:11:43.385016 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 04:11:43.385097 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 13 04:11:43.391988 tar[1461]: ./ Feb 13 04:11:43.391988 tar[1461]: ./macvlan Feb 13 04:11:43.396202 jq[1465]: true Feb 13 04:11:43.396658 update_engine[1458]: I0213 04:11:43.396201 1458 main.cc:92] Flatcar Update Engine starting Feb 13 04:11:43.396877 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 04:11:43.397896 tar[1462]: crictl Feb 13 04:11:43.400164 update_engine[1458]: I0213 04:11:43.400127 1458 update_check_scheduler.cc:74] Next update check in 11m11s Feb 13 04:11:43.401997 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 04:11:43.402139 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 04:11:43.402223 systemd[1]: Started systemd-logind.service. Feb 13 04:11:43.406258 env[1466]: time="2024-02-13T04:11:43.406226390Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 04:11:43.414785 tar[1461]: ./static Feb 13 04:11:43.415736 systemd[1]: Started update-engine.service. Feb 13 04:11:43.418644 env[1466]: time="2024-02-13T04:11:43.418624417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 04:11:43.418884 env[1466]: time="2024-02-13T04:11:43.418873520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 04:11:43.419521 env[1466]: time="2024-02-13T04:11:43.419505976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 04:11:43.419556 env[1466]: time="2024-02-13T04:11:43.419521372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 04:11:43.419662 env[1466]: time="2024-02-13T04:11:43.419649316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 04:11:43.419693 env[1466]: time="2024-02-13T04:11:43.419662265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 04:11:43.419693 env[1466]: time="2024-02-13T04:11:43.419671835Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 04:11:43.419693 env[1466]: time="2024-02-13T04:11:43.419677374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 04:11:43.421455 env[1466]: time="2024-02-13T04:11:43.421445991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 04:11:43.421579 env[1466]: time="2024-02-13T04:11:43.421571413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 04:11:43.421652 env[1466]: time="2024-02-13T04:11:43.421643207Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 04:11:43.421671 env[1466]: time="2024-02-13T04:11:43.421653369Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 04:11:43.423524 env[1466]: time="2024-02-13T04:11:43.423512555Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 04:11:43.423552 env[1466]: time="2024-02-13T04:11:43.423525806Z" level=info msg="metadata content store policy set" policy=shared Feb 13 04:11:43.423691 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Feb 13 04:11:43.424574 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 13 04:11:43.430181 env[1466]: time="2024-02-13T04:11:43.430170885Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 04:11:43.430204 env[1466]: time="2024-02-13T04:11:43.430186413Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 04:11:43.430204 env[1466]: time="2024-02-13T04:11:43.430196255Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 04:11:43.430242 env[1466]: time="2024-02-13T04:11:43.430216823Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 04:11:43.430242 env[1466]: time="2024-02-13T04:11:43.430226550Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 04:11:43.430242 env[1466]: time="2024-02-13T04:11:43.430234415Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 04:11:43.430298 env[1466]: time="2024-02-13T04:11:43.430241279Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 04:11:43.430298 env[1466]: time="2024-02-13T04:11:43.430249082Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 04:11:43.430298 env[1466]: time="2024-02-13T04:11:43.430260472Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 13 04:11:43.430298 env[1466]: time="2024-02-13T04:11:43.430272911Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 04:11:43.430298 env[1466]: time="2024-02-13T04:11:43.430280829Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 04:11:43.430298 env[1466]: time="2024-02-13T04:11:43.430294271Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 04:11:43.430390 env[1466]: time="2024-02-13T04:11:43.430345698Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 04:11:43.430407 env[1466]: time="2024-02-13T04:11:43.430391194Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 04:11:43.430528 env[1466]: time="2024-02-13T04:11:43.430521120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 04:11:43.430550 env[1466]: time="2024-02-13T04:11:43.430535962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430550 env[1466]: time="2024-02-13T04:11:43.430543740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 04:11:43.430583 env[1466]: time="2024-02-13T04:11:43.430571640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430583 env[1466]: time="2024-02-13T04:11:43.430579720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430615 env[1466]: time="2024-02-13T04:11:43.430586657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430615 env[1466]: time="2024-02-13T04:11:43.430593549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430615 env[1466]: time="2024-02-13T04:11:43.430599972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430615 env[1466]: time="2024-02-13T04:11:43.430606784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430615 env[1466]: time="2024-02-13T04:11:43.430612890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430687 env[1466]: time="2024-02-13T04:11:43.430618901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430687 env[1466]: time="2024-02-13T04:11:43.430626038Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 04:11:43.430726 env[1466]: time="2024-02-13T04:11:43.430687431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430726 env[1466]: time="2024-02-13T04:11:43.430696584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430726 env[1466]: time="2024-02-13T04:11:43.430704288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 04:11:43.430726 env[1466]: time="2024-02-13T04:11:43.430711658Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 04:11:43.430726 env[1466]: time="2024-02-13T04:11:43.430721930Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 04:11:43.430802 env[1466]: time="2024-02-13T04:11:43.430728626Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 04:11:43.430802 env[1466]: time="2024-02-13T04:11:43.430738872Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 04:11:43.430802 env[1466]: time="2024-02-13T04:11:43.430771909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 04:11:43.431076 env[1466]: time="2024-02-13T04:11:43.431036265Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431091158Z" level=info msg="Connect containerd service" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431193343Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431510968Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431605670Z" level=info msg="Start subscribing containerd event" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431637968Z" level=info msg="Start recovering state" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431640101Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431669939Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431682328Z" level=info msg="Start event monitor" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431693043Z" level=info msg="Start snapshots syncer" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431700946Z" level=info msg="Start cni network conf syncer for default" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431701862Z" level=info msg="containerd successfully booted in 0.025843s" Feb 13 04:11:43.432768 env[1466]: time="2024-02-13T04:11:43.431708036Z" level=info msg="Start streaming server" Feb 13 04:11:43.434489 systemd[1]: Started containerd.service. Feb 13 04:11:43.438563 tar[1461]: ./vlan Feb 13 04:11:43.442976 systemd[1]: Started locksmithd.service. Feb 13 04:11:43.449387 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 04:11:43.449470 systemd[1]: Reached target system-config.target. Feb 13 04:11:43.457341 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 04:11:43.457414 systemd[1]: Reached target user-config.target. Feb 13 04:11:43.459512 tar[1461]: ./portmap Feb 13 04:11:43.480291 tar[1461]: ./host-local Feb 13 04:11:43.497185 tar[1461]: ./vrf Feb 13 04:11:43.497974 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 04:11:43.515372 tar[1461]: ./bridge Feb 13 04:11:43.537028 tar[1461]: ./tuning Feb 13 04:11:43.554298 tar[1461]: ./firewall Feb 13 04:11:43.576746 tar[1461]: ./host-device Feb 13 04:11:43.596227 tar[1461]: ./sbr Feb 13 04:11:43.614092 tar[1461]: ./loopback Feb 13 04:11:43.631053 tar[1461]: ./dhcp Feb 13 04:11:43.663131 systemd[1]: Finished prepare-critools.service. Feb 13 04:11:43.680257 tar[1461]: ./ptp Feb 13 04:11:43.701286 tar[1461]: ./ipvlan Feb 13 04:11:43.710546 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 04:11:43.721646 tar[1461]: ./bandwidth Feb 13 04:11:43.722271 systemd[1]: Finished sshd-keygen.service. Feb 13 04:11:43.736847 systemd[1]: Starting issuegen.service... Feb 13 04:11:43.765846 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 04:11:43.743517 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 04:11:43.765928 extend-filesystems[1444]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 04:11:43.765928 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 04:11:43.765928 extend-filesystems[1444]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 04:11:43.743592 systemd[1]: Finished issuegen.service. Feb 13 04:11:43.799503 extend-filesystems[1428]: Resized filesystem in /dev/sda9 Feb 13 04:11:43.799503 extend-filesystems[1428]: Found sdb Feb 13 04:11:43.752229 systemd[1]: Starting systemd-user-sessions.service... Feb 13 04:11:43.761493 systemd[1]: Finished systemd-user-sessions.service. Feb 13 04:11:43.770586 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 04:11:43.770661 systemd[1]: Finished extend-filesystems.service. Feb 13 04:11:43.790458 systemd[1]: Started getty@tty1.service. Feb 13 04:11:43.811394 systemd-networkd[1309]: bond0: Gained IPv6LL Feb 13 04:11:43.816894 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 04:11:43.837396 systemd[1]: Reached target getty.target. 
Feb 13 04:11:43.848671 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 04:11:45.054473 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 04:11:48.413698 coreos-metadata[1420]: Feb 13 04:11:48.413 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 04:11:48.414103 coreos-metadata[1423]: Feb 13 04:11:48.413 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 04:11:48.817816 login[1526]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 04:11:48.825609 systemd-logind[1456]: New session 1 of user core. Feb 13 04:11:48.826078 systemd[1]: Created slice user-500.slice. Feb 13 04:11:48.826621 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 04:11:48.828176 login[1525]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 04:11:48.830421 systemd-logind[1456]: New session 2 of user core. Feb 13 04:11:48.832214 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 04:11:48.832869 systemd[1]: Starting user@500.service... Feb 13 04:11:48.834666 (systemd)[1531]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:11:48.987100 systemd[1531]: Queued start job for default target default.target. Feb 13 04:11:48.988443 systemd[1531]: Reached target paths.target. Feb 13 04:11:48.988511 systemd[1531]: Reached target sockets.target. Feb 13 04:11:48.988558 systemd[1531]: Reached target timers.target. Feb 13 04:11:48.988598 systemd[1531]: Reached target basic.target. Feb 13 04:11:48.988710 systemd[1531]: Reached target default.target. Feb 13 04:11:48.988788 systemd[1531]: Startup finished in 151ms. Feb 13 04:11:48.988903 systemd[1]: Started user@500.service. Feb 13 04:11:48.991713 systemd[1]: Started session-1.scope. Feb 13 04:11:48.993606 systemd[1]: Started session-2.scope. Feb 13 04:11:49.413986 coreos-metadata[1420]: Feb 13 04:11:49.413 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 04:11:49.414766 coreos-metadata[1423]: Feb 13 04:11:49.413 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 04:11:49.418334 coreos-metadata[1420]: Feb 13 04:11:49.418 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 04:11:49.420640 coreos-metadata[1423]: Feb 13 04:11:49.420 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 04:11:50.474319 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 13 04:11:50.474524 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 13 04:11:50.534250 systemd[1]: Created slice system-sshd.slice. Feb 13 04:11:50.535845 systemd[1]: Started sshd@0-139.178.94.233:22-139.178.68.195:48084.service. 
Feb 13 04:11:50.591904 sshd[1552]: Accepted publickey for core from 139.178.68.195 port 48084 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:11:50.592850 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:11:50.596091 systemd-logind[1456]: New session 3 of user core. Feb 13 04:11:50.596882 systemd[1]: Started session-3.scope. Feb 13 04:11:50.650452 systemd[1]: Started sshd@1-139.178.94.233:22-139.178.68.195:48088.service. Feb 13 04:11:50.684041 sshd[1557]: Accepted publickey for core from 139.178.68.195 port 48088 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:11:50.684742 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:11:50.686908 systemd-logind[1456]: New session 4 of user core. Feb 13 04:11:50.687345 systemd[1]: Started session-4.scope. Feb 13 04:11:50.737793 sshd[1557]: pam_unix(sshd:session): session closed for user core Feb 13 04:11:50.739396 systemd[1]: sshd@1-139.178.94.233:22-139.178.68.195:48088.service: Deactivated successfully. Feb 13 04:11:50.739713 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 04:11:50.740010 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Feb 13 04:11:50.740559 systemd[1]: Started sshd@2-139.178.94.233:22-139.178.68.195:48100.service. Feb 13 04:11:50.740949 systemd-logind[1456]: Removed session 4. Feb 13 04:11:50.774686 sshd[1563]: Accepted publickey for core from 139.178.68.195 port 48100 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:11:50.775592 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:11:50.778625 systemd-logind[1456]: New session 5 of user core. Feb 13 04:11:50.779299 systemd[1]: Started session-5.scope. Feb 13 04:11:50.834781 sshd[1563]: pam_unix(sshd:session): session closed for user core Feb 13 04:11:50.836012 systemd[1]: sshd@2-139.178.94.233:22-139.178.68.195:48100.service: Deactivated successfully. Feb 13 04:11:50.836381 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 04:11:50.836761 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Feb 13 04:11:50.837180 systemd-logind[1456]: Removed session 5. Feb 13 04:11:51.418772 coreos-metadata[1420]: Feb 13 04:11:51.418 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 13 04:11:51.420902 coreos-metadata[1423]: Feb 13 04:11:51.420 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 13 04:11:51.444697 coreos-metadata[1423]: Feb 13 04:11:51.444 INFO Fetch successful Feb 13 04:11:51.444799 coreos-metadata[1420]: Feb 13 04:11:51.444 INFO Fetch successful Feb 13 04:11:51.466434 systemd[1]: Finished coreos-metadata.service. Feb 13 04:11:51.467311 systemd[1]: Started packet-phone-home.service. Feb 13 04:11:51.467411 unknown[1420]: wrote ssh authorized keys file for user: core Feb 13 04:11:51.472681 curl[1570]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 04:11:51.472819 curl[1570]: Dload Upload Total Spent Left Speed Feb 13 04:11:51.478963 update-ssh-keys[1571]: Updated "/home/core/.ssh/authorized_keys" Feb 13 04:11:51.479125 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 04:11:51.479369 systemd[1]: Reached target multi-user.target. Feb 13 04:11:51.479937 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 04:11:51.483805 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Feb 13 04:11:51.483874 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 04:11:51.484011 systemd[1]: Startup finished in 1.849s (kernel) + 19.357s (initrd) + 15.292s (userspace) = 36.499s. Feb 13 04:11:52.083934 curl[1570]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 04:11:52.086304 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 13 04:12:00.844680 systemd[1]: Started sshd@3-139.178.94.233:22-139.178.68.195:44028.service. Feb 13 04:12:00.878683 sshd[1575]: Accepted publickey for core from 139.178.68.195 port 44028 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:00.879656 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:00.883007 systemd-logind[1456]: New session 6 of user core. Feb 13 04:12:00.883870 systemd[1]: Started session-6.scope. Feb 13 04:12:00.939390 sshd[1575]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:00.940932 systemd[1]: sshd@3-139.178.94.233:22-139.178.68.195:44028.service: Deactivated successfully. Feb 13 04:12:00.941236 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 04:12:00.941551 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Feb 13 04:12:00.942058 systemd[1]: Started sshd@4-139.178.94.233:22-139.178.68.195:44042.service. Feb 13 04:12:00.942466 systemd-logind[1456]: Removed session 6. Feb 13 04:12:00.977192 sshd[1581]: Accepted publickey for core from 139.178.68.195 port 44042 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:00.978322 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:00.982138 systemd-logind[1456]: New session 7 of user core. Feb 13 04:12:00.983128 systemd[1]: Started session-7.scope. Feb 13 04:12:01.038017 sshd[1581]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:01.039481 systemd[1]: sshd@4-139.178.94.233:22-139.178.68.195:44042.service: Deactivated successfully. Feb 13 04:12:01.039779 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 04:12:01.040084 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Feb 13 04:12:01.040627 systemd[1]: Started sshd@5-139.178.94.233:22-139.178.68.195:44052.service. Feb 13 04:12:01.041045 systemd-logind[1456]: Removed session 7. Feb 13 04:12:01.075247 sshd[1588]: Accepted publickey for core from 139.178.68.195 port 44052 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:01.076315 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:01.080054 systemd-logind[1456]: New session 8 of user core. Feb 13 04:12:01.080922 systemd[1]: Started session-8.scope. Feb 13 04:12:01.138590 sshd[1588]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:01.140163 systemd[1]: sshd@5-139.178.94.233:22-139.178.68.195:44052.service: Deactivated successfully. Feb 13 04:12:01.140483 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 04:12:01.140799 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Feb 13 04:12:01.141324 systemd[1]: Started sshd@6-139.178.94.233:22-139.178.68.195:44062.service. Feb 13 04:12:01.141733 systemd-logind[1456]: Removed session 8.
Feb 13 04:12:01.176198 sshd[1594]: Accepted publickey for core from 139.178.68.195 port 44062 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 04:12:01.177393 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 04:12:01.181304 systemd-logind[1456]: New session 9 of user core. Feb 13 04:12:01.182320 systemd[1]: Started session-9.scope. Feb 13 04:12:01.268033 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 04:12:01.268692 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 04:12:05.299168 systemd[1]: Reloading. Feb 13 04:12:05.329871 /usr/lib/systemd/system-generators/torcx-generator[1629]: time="2024-02-13T04:12:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 04:12:05.329887 /usr/lib/systemd/system-generators/torcx-generator[1629]: time="2024-02-13T04:12:05Z" level=info msg="torcx already run" Feb 13 04:12:05.380412 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 04:12:05.380420 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 04:12:05.391641 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 04:12:05.444989 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 04:12:05.448409 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 04:12:05.448652 systemd[1]: Reached target network-online.target. Feb 13 04:12:05.449309 systemd[1]: Started kubelet.service. Feb 13 04:12:05.473293 kubelet[1687]: E0213 04:12:05.473249 1687 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 13 04:12:05.474648 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 04:12:05.474720 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 04:12:05.890656 systemd[1]: Stopped kubelet.service. Feb 13 04:12:05.901169 systemd[1]: Reloading. Feb 13 04:12:05.962306 /usr/lib/systemd/system-generators/torcx-generator[1783]: time="2024-02-13T04:12:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 04:12:05.962324 /usr/lib/systemd/system-generators/torcx-generator[1783]: time="2024-02-13T04:12:05Z" level=info msg="torcx already run" Feb 13 04:12:06.011519 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 04:12:06.011526 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 13 04:12:06.022314 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 04:12:06.080018 systemd[1]: Started kubelet.service. Feb 13 04:12:06.102279 kubelet[1841]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 13 04:12:06.102279 kubelet[1841]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 04:12:06.102485 kubelet[1841]: I0213 04:12:06.102281 1841 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 04:12:06.103062 kubelet[1841]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 13 04:12:06.103062 kubelet[1841]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 04:12:06.352544 kubelet[1841]: I0213 04:12:06.352505 1841 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 13 04:12:06.352544 kubelet[1841]: I0213 04:12:06.352516 1841 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 04:12:06.352680 kubelet[1841]: I0213 04:12:06.352634 1841 server.go:836] "Client rotation is on, will bootstrap in background" Feb 13 04:12:06.354300 kubelet[1841]: I0213 04:12:06.354246 1841 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 04:12:06.373277 kubelet[1841]: I0213 04:12:06.373240 1841 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 04:12:06.373443 kubelet[1841]: I0213 04:12:06.373408 1841 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 04:12:06.373443 kubelet[1841]: I0213 04:12:06.373443 1841 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 04:12:06.373549 kubelet[1841]: I0213 04:12:06.373454 1841 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 04:12:06.373549 kubelet[1841]: I0213 04:12:06.373460 1841 container_manager_linux.go:308] "Creating device plugin manager" Feb 13 04:12:06.373549 kubelet[1841]: I0213 04:12:06.373519 1841 state_mem.go:36] "Initialized new in-memory state store" Feb 13 04:12:06.374922 kubelet[1841]: I0213 04:12:06.374891 1841 kubelet.go:398] "Attempting to sync node with API server" Feb 13 04:12:06.374922 kubelet[1841]: I0213 04:12:06.374920 1841 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 04:12:06.374995 kubelet[1841]: I0213 04:12:06.374931 1841 kubelet.go:297] "Adding apiserver pod source" Feb 13 04:12:06.374995 kubelet[1841]: I0213 04:12:06.374941 1841 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 04:12:06.375032 kubelet[1841]: E0213 04:12:06.375003 1841 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:06.375032 kubelet[1841]: E0213 04:12:06.375023 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:06.375196 kubelet[1841]: I0213 04:12:06.375164 1841 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 04:12:06.375341 kubelet[1841]: W0213 04:12:06.375276 1841 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 04:12:06.375529 kubelet[1841]: I0213 04:12:06.375495 1841 server.go:1186] "Started kubelet" Feb 13 04:12:06.375570 kubelet[1841]: I0213 04:12:06.375557 1841 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 04:12:06.375658 kubelet[1841]: E0213 04:12:06.375636 1841 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 04:12:06.375658 kubelet[1841]: E0213 04:12:06.375649 1841 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 04:12:06.376404 kubelet[1841]: I0213 04:12:06.376391 1841 server.go:451] "Adding debug handlers to kubelet server" Feb 13 04:12:06.385796 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 13 04:12:06.385994 kubelet[1841]: I0213 04:12:06.385985 1841 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 04:12:06.386087 kubelet[1841]: I0213 04:12:06.386079 1841 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 13 04:12:06.386129 kubelet[1841]: I0213 04:12:06.386108 1841 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 04:12:06.391156 kubelet[1841]: E0213 04:12:06.391145 1841 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.11\" not found" node="10.67.80.11" Feb 13 04:12:06.394685 kubelet[1841]: I0213 04:12:06.394676 1841 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 04:12:06.394685 kubelet[1841]: I0213 04:12:06.394685 1841 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 04:12:06.394750 kubelet[1841]: I0213 04:12:06.394694 1841 state_mem.go:36] "Initialized new in-memory state store" Feb 13 04:12:06.395634 kubelet[1841]: I0213 04:12:06.395626 1841 policy_none.go:49] "None policy: Start" Feb 13 04:12:06.395895 kubelet[1841]: I0213 04:12:06.395889 1841 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 04:12:06.395925 kubelet[1841]: I0213 04:12:06.395900 1841 state_mem.go:35] "Initializing new in-memory state store" Feb 13 04:12:06.398348 systemd[1]: Created slice kubepods.slice. Feb 13 04:12:06.400263 systemd[1]: Created slice kubepods-burstable.slice. Feb 13 04:12:06.401598 systemd[1]: Created slice kubepods-besteffort.slice. Feb 13 04:12:06.411790 kubelet[1841]: I0213 04:12:06.411753 1841 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 04:12:06.411917 kubelet[1841]: I0213 04:12:06.411865 1841 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 04:12:06.412126 kubelet[1841]: E0213 04:12:06.412094 1841 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.11\" not found" Feb 13 04:12:06.487469 kubelet[1841]: I0213 04:12:06.487452 1841 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.11" Feb 13 04:12:06.523556 kubelet[1841]: I0213 04:12:06.523503 1841 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 04:12:06.549337 kubelet[1841]: I0213 04:12:06.549300 1841 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 13 04:12:06.549337 kubelet[1841]: I0213 04:12:06.549324 1841 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 13 04:12:06.549337 kubelet[1841]: I0213 04:12:06.549344 1841 kubelet.go:2113] "Starting kubelet main sync loop" Feb 13 04:12:06.549537 kubelet[1841]: E0213 04:12:06.549393 1841 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 04:12:06.581046 kubelet[1841]: I0213 04:12:06.580951 1841 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.11" Feb 13 04:12:06.599566 kubelet[1841]: I0213 04:12:06.599513 1841 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 04:12:06.600373 env[1466]: time="2024-02-13T04:12:06.600205894Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 04:12:06.601362 kubelet[1841]: I0213 04:12:06.600688 1841 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 04:12:07.376066 kubelet[1841]: E0213 04:12:07.376011 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:07.376921 kubelet[1841]: I0213 04:12:07.376128 1841 apiserver.go:52] "Watching apiserver" Feb 13 04:12:07.580707 kubelet[1841]: I0213 04:12:07.580609 1841 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:12:07.580990 kubelet[1841]: I0213 04:12:07.580776 1841 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:12:07.588583 kubelet[1841]: I0213 04:12:07.588530 1841 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 04:12:07.592989 kubelet[1841]: I0213 04:12:07.592940 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-xtables-lock\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.593298 kubelet[1841]: I0213 04:12:07.593078 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-config-path\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.593298 kubelet[1841]: I0213 04:12:07.593205 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-host-proc-sys-kernel\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.593691 kubelet[1841]: I0213 04:12:07.593335 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chtct\" (UniqueName: \"kubernetes.io/projected/28cdcf06-4ffd-47b4-8529-e121af6c6439-kube-api-access-chtct\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.593691 kubelet[1841]: I0213 04:12:07.593474 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/785a22d8-5563-4d0a-954c-d53adefd622c-lib-modules\") pod 
\"kube-proxy-tmc6r\" (UID: \"785a22d8-5563-4d0a-954c-d53adefd622c\") " pod="kube-system/kube-proxy-tmc6r" Feb 13 04:12:07.593691 kubelet[1841]: I0213 04:12:07.593595 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-run\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.594192 kubelet[1841]: I0213 04:12:07.593709 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-cgroup\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.594192 kubelet[1841]: I0213 04:12:07.593817 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-etc-cni-netd\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.594192 kubelet[1841]: I0213 04:12:07.593928 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28cdcf06-4ffd-47b4-8529-e121af6c6439-clustermesh-secrets\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.594192 kubelet[1841]: I0213 04:12:07.594068 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/785a22d8-5563-4d0a-954c-d53adefd622c-kube-proxy\") pod \"kube-proxy-tmc6r\" (UID: \"785a22d8-5563-4d0a-954c-d53adefd622c\") " pod="kube-system/kube-proxy-tmc6r" Feb 13 04:12:07.594873 kubelet[1841]: I0213 04:12:07.594221 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv2t6\" (UniqueName: \"kubernetes.io/projected/785a22d8-5563-4d0a-954c-d53adefd622c-kube-api-access-cv2t6\") pod \"kube-proxy-tmc6r\" (UID: \"785a22d8-5563-4d0a-954c-d53adefd622c\") " pod="kube-system/kube-proxy-tmc6r" Feb 13 04:12:07.594873 kubelet[1841]: I0213 04:12:07.594384 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-hostproc\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.594873 kubelet[1841]: I0213 04:12:07.594494 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-bpf-maps\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.594873 kubelet[1841]: I0213 04:12:07.594648 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-host-proc-sys-net\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.594873 kubelet[1841]: I0213 04:12:07.594810 1841 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28cdcf06-4ffd-47b4-8529-e121af6c6439-hubble-tls\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.595623 kubelet[1841]: I0213 04:12:07.594898 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/785a22d8-5563-4d0a-954c-d53adefd622c-xtables-lock\") pod \"kube-proxy-tmc6r\" (UID: \"785a22d8-5563-4d0a-954c-d53adefd622c\") " pod="kube-system/kube-proxy-tmc6r" Feb 13 04:12:07.595623 kubelet[1841]: I0213 04:12:07.595062 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cni-path\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.595623 kubelet[1841]: I0213 04:12:07.595190 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-lib-modules\") pod \"cilium-k547j\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " pod="kube-system/cilium-k547j" Feb 13 04:12:07.595623 kubelet[1841]: I0213 04:12:07.595286 1841 reconciler.go:41] "Reconciler: start to sync state" Feb 13 04:12:07.595136 systemd[1]: Created slice kubepods-besteffort-pod785a22d8_5563_4d0a_954c_d53adefd622c.slice. Feb 13 04:12:07.610973 sudo[1597]: pam_unix(sudo:session): session closed for user root Feb 13 04:12:07.615728 sshd[1594]: pam_unix(sshd:session): session closed for user core Feb 13 04:12:07.619279 systemd[1]: Created slice kubepods-burstable-pod28cdcf06_4ffd_47b4_8529_e121af6c6439.slice. Feb 13 04:12:07.621842 systemd[1]: sshd@6-139.178.94.233:22-139.178.68.195:44062.service: Deactivated successfully. Feb 13 04:12:07.623654 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 04:12:07.636118 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Feb 13 04:12:07.638460 systemd-logind[1456]: Removed session 9. Feb 13 04:12:08.376331 kubelet[1841]: E0213 04:12:08.376240 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:08.696885 kubelet[1841]: E0213 04:12:08.696780 1841 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 04:12:08.697213 kubelet[1841]: E0213 04:12:08.697010 1841 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-config-path podName:28cdcf06-4ffd-47b4-8529-e121af6c6439 nodeName:}" failed. No retries permitted until 2024-02-13 04:12:09.196930255 +0000 UTC m=+3.115204023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-config-path") pod "cilium-k547j" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439") : failed to sync configmap cache: timed out waiting for the condition Feb 13 04:12:08.775569 kubelet[1841]: I0213 04:12:08.775468 1841 request.go:690] Waited for 1.193719074s due to client-side throttling, not priority and fairness, request: GET:https://139.178.90.101:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcilium-config&limit=500&resourceVersion=0 Feb 13 04:12:09.376537 kubelet[1841]: E0213 04:12:09.376438 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:09.414867 env[1466]: time="2024-02-13T04:12:09.414766813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmc6r,Uid:785a22d8-5563-4d0a-954c-d53adefd622c,Namespace:kube-system,Attempt:0,}" Feb 13 04:12:09.437880 env[1466]: time="2024-02-13T04:12:09.437761452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k547j,Uid:28cdcf06-4ffd-47b4-8529-e121af6c6439,Namespace:kube-system,Attempt:0,}" Feb 13 04:12:10.144018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252496799.mount: Deactivated successfully. Feb 13 04:12:10.145531 env[1466]: time="2024-02-13T04:12:10.145466969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:10.146922 env[1466]: time="2024-02-13T04:12:10.146858897Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:10.148377 env[1466]: time="2024-02-13T04:12:10.148328651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:10.149894 env[1466]: time="2024-02-13T04:12:10.149848587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:10.152132 env[1466]: time="2024-02-13T04:12:10.152074637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:10.153523 env[1466]: time="2024-02-13T04:12:10.153478988Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:10.154774 env[1466]: time="2024-02-13T04:12:10.154753412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:10.155397 env[1466]: time="2024-02-13T04:12:10.155380072Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:10.161027 env[1466]: time="2024-02-13T04:12:10.160987556Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 04:12:10.161027 env[1466]: time="2024-02-13T04:12:10.161015041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 04:12:10.161027 env[1466]: time="2024-02-13T04:12:10.161024687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 04:12:10.161183 env[1466]: time="2024-02-13T04:12:10.161102148Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e pid=1952 runtime=io.containerd.runc.v2 Feb 13 04:12:10.161752 env[1466]: time="2024-02-13T04:12:10.161697196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 04:12:10.161752 env[1466]: time="2024-02-13T04:12:10.161721970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 04:12:10.161752 env[1466]: time="2024-02-13T04:12:10.161732101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 04:12:10.161853 env[1466]: time="2024-02-13T04:12:10.161806342Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8e16cc9f86f46cc02ed5971d4f6b6ed825166d74226b82e6bbf8c3051e81c89 pid=1959 runtime=io.containerd.runc.v2 Feb 13 04:12:10.168608 systemd[1]: Started cri-containerd-92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e.scope. Feb 13 04:12:10.169644 systemd[1]: Started cri-containerd-e8e16cc9f86f46cc02ed5971d4f6b6ed825166d74226b82e6bbf8c3051e81c89.scope. Feb 13 04:12:10.179724 env[1466]: time="2024-02-13T04:12:10.179690632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmc6r,Uid:785a22d8-5563-4d0a-954c-d53adefd622c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8e16cc9f86f46cc02ed5971d4f6b6ed825166d74226b82e6bbf8c3051e81c89\"" Feb 13 04:12:10.180278 env[1466]: time="2024-02-13T04:12:10.180257841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k547j,Uid:28cdcf06-4ffd-47b4-8529-e121af6c6439,Namespace:kube-system,Attempt:0,} returns sandbox id \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\"" Feb 13 04:12:10.181284 env[1466]: time="2024-02-13T04:12:10.181262162Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 13 04:12:10.377394 kubelet[1841]: E0213 04:12:10.377331 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:10.975338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157398636.mount: Deactivated successfully. 
Feb 13 04:12:11.286999 env[1466]: time="2024-02-13T04:12:11.286704206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:11.289188 env[1466]: time="2024-02-13T04:12:11.289116864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:11.292729 env[1466]: time="2024-02-13T04:12:11.292624925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:11.296209 env[1466]: time="2024-02-13T04:12:11.296102484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:11.297780 env[1466]: time="2024-02-13T04:12:11.297658637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 13 04:12:11.299179 env[1466]: time="2024-02-13T04:12:11.299085731Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 04:12:11.301812 env[1466]: time="2024-02-13T04:12:11.301698181Z" level=info msg="CreateContainer within sandbox \"e8e16cc9f86f46cc02ed5971d4f6b6ed825166d74226b82e6bbf8c3051e81c89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 04:12:11.320098 env[1466]: time="2024-02-13T04:12:11.320081697Z" level=info msg="CreateContainer within sandbox \"e8e16cc9f86f46cc02ed5971d4f6b6ed825166d74226b82e6bbf8c3051e81c89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8e2d504ef1112c32effdf258d1a54ddd84cb4e86e40b9444d2f8344420bd0b4\"" Feb 13 04:12:11.320363 env[1466]: time="2024-02-13T04:12:11.320352334Z" level=info msg="StartContainer for \"f8e2d504ef1112c32effdf258d1a54ddd84cb4e86e40b9444d2f8344420bd0b4\"" Feb 13 04:12:11.329433 systemd[1]: Started cri-containerd-f8e2d504ef1112c32effdf258d1a54ddd84cb4e86e40b9444d2f8344420bd0b4.scope. Feb 13 04:12:11.342374 env[1466]: time="2024-02-13T04:12:11.342351786Z" level=info msg="StartContainer for \"f8e2d504ef1112c32effdf258d1a54ddd84cb4e86e40b9444d2f8344420bd0b4\" returns successfully" Feb 13 04:12:11.377829 kubelet[1841]: E0213 04:12:11.377809 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:12.379070 kubelet[1841]: E0213 04:12:12.378956 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:13.379633 kubelet[1841]: E0213 04:12:13.379594 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:14.379652 kubelet[1841]: E0213 04:12:14.379631 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:15.288878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310106115.mount: Deactivated successfully. 
Feb 13 04:12:15.380076 kubelet[1841]: E0213 04:12:15.380054 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:16.354833 kubelet[1841]: I0213 04:12:16.354817 1841 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 04:12:16.380761 kubelet[1841]: E0213 04:12:16.380749 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:16.950930 env[1466]: time="2024-02-13T04:12:16.950878095Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:16.951508 env[1466]: time="2024-02-13T04:12:16.951466408Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:16.952326 env[1466]: time="2024-02-13T04:12:16.952269039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 04:12:16.952741 env[1466]: time="2024-02-13T04:12:16.952690208Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 04:12:16.953666 env[1466]: time="2024-02-13T04:12:16.953620583Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 04:12:16.957725 env[1466]: time="2024-02-13T04:12:16.957684001Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\"" Feb 13 04:12:16.957970 env[1466]: time="2024-02-13T04:12:16.957918827Z" level=info msg="StartContainer for \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\"" Feb 13 04:12:16.967116 systemd[1]: Started cri-containerd-3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a.scope. Feb 13 04:12:16.978815 env[1466]: time="2024-02-13T04:12:16.978792116Z" level=info msg="StartContainer for \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\" returns successfully" Feb 13 04:12:16.983799 systemd[1]: cri-containerd-3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a.scope: Deactivated successfully. 
Feb 13 04:12:17.381650 kubelet[1841]: E0213 04:12:17.381439 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:17.614662 kubelet[1841]: I0213 04:12:17.614607 1841 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tmc6r" podStartSLOduration=-9.223372026240242e+09 pod.CreationTimestamp="2024-02-13 04:12:07 +0000 UTC" firstStartedPulling="2024-02-13 04:12:10.180746042 +0000 UTC m=+4.099019754" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 04:12:11.582853317 +0000 UTC m=+5.501127105" watchObservedRunningTime="2024-02-13 04:12:17.614533768 +0000 UTC m=+11.532807480" Feb 13 04:12:17.961502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a-rootfs.mount: Deactivated successfully. Feb 13 04:12:18.157249 env[1466]: time="2024-02-13T04:12:18.157140031Z" level=info msg="shim disconnected" id=3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a Feb 13 04:12:18.158082 env[1466]: time="2024-02-13T04:12:18.157250149Z" level=warning msg="cleaning up after shim disconnected" id=3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a namespace=k8s.io Feb 13 04:12:18.158082 env[1466]: time="2024-02-13T04:12:18.157316710Z" level=info msg="cleaning up dead shim" Feb 13 04:12:18.165295 env[1466]: time="2024-02-13T04:12:18.165272046Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:12:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2225 runtime=io.containerd.runc.v2\n" Feb 13 04:12:19.630448 systemd-resolved[1411]: Clock change detected. Flushing caches. Feb 13 04:12:19.630647 systemd-timesyncd[1412]: Contacted time server [2606:82c0:23::e]:123 (2.flatcar.pool.ntp.org). Feb 13 04:12:19.630773 systemd-timesyncd[1412]: Initial clock synchronization to Tue 2024-02-13 04:12:19.630304 UTC. Feb 13 04:12:19.668213 kubelet[1841]: E0213 04:12:19.668105 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:19.877295 env[1466]: time="2024-02-13T04:12:19.877158783Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 04:12:19.891993 env[1466]: time="2024-02-13T04:12:19.891801716Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\"" Feb 13 04:12:19.892605 env[1466]: time="2024-02-13T04:12:19.892586150Z" level=info msg="StartContainer for \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\"" Feb 13 04:12:19.901732 systemd[1]: Started cri-containerd-7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7.scope. Feb 13 04:12:19.912856 env[1466]: time="2024-02-13T04:12:19.912826854Z" level=info msg="StartContainer for \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\" returns successfully" Feb 13 04:12:19.919082 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 04:12:19.919244 systemd[1]: Stopped systemd-sysctl.service. Feb 13 04:12:19.919360 systemd[1]: Stopping systemd-sysctl.service... 
Feb 13 04:12:19.920173 systemd[1]: Starting systemd-sysctl.service... Feb 13 04:12:19.920376 systemd[1]: cri-containerd-7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7.scope: Deactivated successfully. Feb 13 04:12:19.923930 systemd[1]: Finished systemd-sysctl.service. Feb 13 04:12:19.930067 env[1466]: time="2024-02-13T04:12:19.930042151Z" level=info msg="shim disconnected" id=7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7 Feb 13 04:12:19.930157 env[1466]: time="2024-02-13T04:12:19.930073195Z" level=warning msg="cleaning up after shim disconnected" id=7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7 namespace=k8s.io Feb 13 04:12:19.930157 env[1466]: time="2024-02-13T04:12:19.930082254Z" level=info msg="cleaning up dead shim" Feb 13 04:12:19.933406 env[1466]: time="2024-02-13T04:12:19.933390370Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:12:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2287 runtime=io.containerd.runc.v2\n" Feb 13 04:12:20.246332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7-rootfs.mount: Deactivated successfully. Feb 13 04:12:20.668819 kubelet[1841]: E0213 04:12:20.668603 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:20.883512 env[1466]: time="2024-02-13T04:12:20.883387178Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 04:12:20.901938 env[1466]: time="2024-02-13T04:12:20.901844263Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\"" Feb 13 04:12:20.902262 env[1466]: time="2024-02-13T04:12:20.902234387Z" level=info msg="StartContainer for \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\"" Feb 13 04:12:20.911778 systemd[1]: Started cri-containerd-29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599.scope. Feb 13 04:12:20.924515 env[1466]: time="2024-02-13T04:12:20.924462835Z" level=info msg="StartContainer for \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\" returns successfully" Feb 13 04:12:20.926077 systemd[1]: cri-containerd-29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599.scope: Deactivated successfully. 
Feb 13 04:12:20.953901 env[1466]: time="2024-02-13T04:12:20.953834503Z" level=info msg="shim disconnected" id=29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599 Feb 13 04:12:20.953901 env[1466]: time="2024-02-13T04:12:20.953874825Z" level=warning msg="cleaning up after shim disconnected" id=29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599 namespace=k8s.io Feb 13 04:12:20.953901 env[1466]: time="2024-02-13T04:12:20.953885590Z" level=info msg="cleaning up dead shim" Feb 13 04:12:20.959520 env[1466]: time="2024-02-13T04:12:20.959467363Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:12:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2343 runtime=io.containerd.runc.v2\n" Feb 13 04:12:21.246560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599-rootfs.mount: Deactivated successfully. Feb 13 04:12:21.669756 kubelet[1841]: E0213 04:12:21.669545 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:21.890153 env[1466]: time="2024-02-13T04:12:21.890018186Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 04:12:21.905663 env[1466]: time="2024-02-13T04:12:21.905558855Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\"" Feb 13 04:12:21.906112 env[1466]: time="2024-02-13T04:12:21.906069450Z" level=info msg="StartContainer for \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\"" Feb 13 04:12:21.915289 systemd[1]: Started cri-containerd-798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503.scope. Feb 13 04:12:21.926741 env[1466]: time="2024-02-13T04:12:21.926686479Z" level=info msg="StartContainer for \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\" returns successfully" Feb 13 04:12:21.926996 systemd[1]: cri-containerd-798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503.scope: Deactivated successfully. Feb 13 04:12:21.948044 env[1466]: time="2024-02-13T04:12:21.947972140Z" level=info msg="shim disconnected" id=798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503 Feb 13 04:12:21.948044 env[1466]: time="2024-02-13T04:12:21.948007559Z" level=warning msg="cleaning up after shim disconnected" id=798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503 namespace=k8s.io Feb 13 04:12:21.948044 env[1466]: time="2024-02-13T04:12:21.948015626Z" level=info msg="cleaning up dead shim" Feb 13 04:12:21.952641 env[1466]: time="2024-02-13T04:12:21.952617999Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:12:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2398 runtime=io.containerd.runc.v2\n" Feb 13 04:12:22.247868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503-rootfs.mount: Deactivated successfully. 
Feb 13 04:12:22.670872 kubelet[1841]: E0213 04:12:22.670648 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:22.898653 env[1466]: time="2024-02-13T04:12:22.898520852Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 04:12:22.916721 env[1466]: time="2024-02-13T04:12:22.916574749Z" level=info msg="CreateContainer within sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\"" Feb 13 04:12:22.917553 env[1466]: time="2024-02-13T04:12:22.917443281Z" level=info msg="StartContainer for \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\"" Feb 13 04:12:22.931256 systemd[1]: Started cri-containerd-0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982.scope. Feb 13 04:12:22.943231 env[1466]: time="2024-02-13T04:12:22.943182374Z" level=info msg="StartContainer for \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\" returns successfully" Feb 13 04:12:22.996481 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 04:12:23.033114 kubelet[1841]: I0213 04:12:23.033101 1841 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 13 04:12:23.160471 kernel: Initializing XFRM netlink socket Feb 13 04:12:23.175503 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 04:12:23.671834 kubelet[1841]: E0213 04:12:23.671722 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:24.672996 kubelet[1841]: E0213 04:12:24.672878 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:24.767223 systemd-networkd[1309]: cilium_host: Link UP Feb 13 04:12:24.767304 systemd-networkd[1309]: cilium_net: Link UP Feb 13 04:12:24.767306 systemd-networkd[1309]: cilium_net: Gained carrier Feb 13 04:12:24.767391 systemd-networkd[1309]: cilium_host: Gained carrier Feb 13 04:12:24.775335 systemd-networkd[1309]: cilium_host: Gained IPv6LL Feb 13 04:12:24.775468 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 13 04:12:24.817255 systemd-networkd[1309]: cilium_vxlan: Link UP Feb 13 04:12:24.817260 systemd-networkd[1309]: cilium_vxlan: Gained carrier Feb 13 04:12:24.946490 kernel: NET: Registered PF_ALG protocol family Feb 13 04:12:25.132100 kubelet[1841]: I0213 04:12:25.132061 1841 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-k547j" podStartSLOduration=-9.223372018722767e+09 pod.CreationTimestamp="2024-02-13 04:12:07 +0000 UTC" firstStartedPulling="2024-02-13 04:12:10.18119966 +0000 UTC m=+4.099473379" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 04:12:23.921037697 +0000 UTC m=+16.553148146" watchObservedRunningTime="2024-02-13 04:12:25.132008889 +0000 UTC m=+17.764119272" Feb 13 04:12:25.132272 kubelet[1841]: I0213 04:12:25.132240 1841 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:12:25.134996 systemd[1]: Created slice 
kubepods-besteffort-pod6eafb017_5c58_4512_a9ea_9534428cc6ce.slice. Feb 13 04:12:25.188891 kubelet[1841]: I0213 04:12:25.188836 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfp4f\" (UniqueName: \"kubernetes.io/projected/6eafb017-5c58-4512-a9ea-9534428cc6ce-kube-api-access-tfp4f\") pod \"nginx-deployment-8ffc5cf85-6sltb\" (UID: \"6eafb017-5c58-4512-a9ea-9534428cc6ce\") " pod="default/nginx-deployment-8ffc5cf85-6sltb" Feb 13 04:12:25.402201 systemd-networkd[1309]: lxc_health: Link UP Feb 13 04:12:25.424307 systemd-networkd[1309]: lxc_health: Gained carrier Feb 13 04:12:25.424445 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 04:12:25.437023 env[1466]: time="2024-02-13T04:12:25.436990318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-6sltb,Uid:6eafb017-5c58-4512-a9ea-9534428cc6ce,Namespace:default,Attempt:0,}" Feb 13 04:12:25.609599 systemd-networkd[1309]: cilium_net: Gained IPv6LL Feb 13 04:12:25.673362 kubelet[1841]: E0213 04:12:25.673260 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:25.975979 systemd-networkd[1309]: lxc435602fc2746: Link UP Feb 13 04:12:25.980425 kernel: eth0: renamed from tmpbd7f2 Feb 13 04:12:26.030261 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 04:12:26.030317 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc435602fc2746: link becomes ready Feb 13 04:12:26.030343 systemd-networkd[1309]: lxc435602fc2746: Gained carrier Feb 13 04:12:26.121519 systemd-networkd[1309]: cilium_vxlan: Gained IPv6LL Feb 13 04:12:26.673604 kubelet[1841]: E0213 04:12:26.673545 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:26.953597 systemd-networkd[1309]: lxc_health: Gained IPv6LL Feb 13 04:12:27.081533 systemd-networkd[1309]: lxc435602fc2746: Gained IPv6LL Feb 13 04:12:27.661886 kubelet[1841]: E0213 04:12:27.661840 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:27.674113 kubelet[1841]: E0213 04:12:27.674063 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:28.239143 env[1466]: time="2024-02-13T04:12:28.239066480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 04:12:28.239143 env[1466]: time="2024-02-13T04:12:28.239089641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 04:12:28.239143 env[1466]: time="2024-02-13T04:12:28.239096972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 04:12:28.239374 env[1466]: time="2024-02-13T04:12:28.239150935Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd7f2d184f64d5aaccf1601ac3073ee64d5ba370667b65885e5c09090f5e7636 pid=3048 runtime=io.containerd.runc.v2 Feb 13 04:12:28.245058 systemd[1]: Started cri-containerd-bd7f2d184f64d5aaccf1601ac3073ee64d5ba370667b65885e5c09090f5e7636.scope. 
Feb 13 04:12:28.266921 env[1466]: time="2024-02-13T04:12:28.266873576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-6sltb,Uid:6eafb017-5c58-4512-a9ea-9534428cc6ce,Namespace:default,Attempt:0,} returns sandbox id \"bd7f2d184f64d5aaccf1601ac3073ee64d5ba370667b65885e5c09090f5e7636\"" Feb 13 04:12:28.267516 env[1466]: time="2024-02-13T04:12:28.267502599Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 04:12:28.674380 kubelet[1841]: E0213 04:12:28.674177 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:29.674768 kubelet[1841]: E0213 04:12:29.674644 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:30.273715 update_engine[1458]: I0213 04:12:30.273598 1458 update_attempter.cc:509] Updating boot flags... Feb 13 04:12:30.675891 kubelet[1841]: E0213 04:12:30.675631 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:31.676851 kubelet[1841]: E0213 04:12:31.676725 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:32.677607 kubelet[1841]: E0213 04:12:32.677489 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:33.158409 kubelet[1841]: I0213 04:12:33.158299 1841 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 13 04:12:33.677895 kubelet[1841]: E0213 04:12:33.677820 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:34.679103 kubelet[1841]: E0213 04:12:34.678980 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:35.680208 kubelet[1841]: E0213 04:12:35.680142 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:36.680551 kubelet[1841]: E0213 04:12:36.680483 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:36.693201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447225879.mount: Deactivated successfully. 
Feb 13 04:12:37.681481 kubelet[1841]: E0213 04:12:37.681379 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:38.681706 kubelet[1841]: E0213 04:12:38.681587 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:39.682392 kubelet[1841]: E0213 04:12:39.682280 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:40.683053 kubelet[1841]: E0213 04:12:40.682932 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:41.683475 kubelet[1841]: E0213 04:12:41.683361 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:42.684489 kubelet[1841]: E0213 04:12:42.684382 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:43.685173 kubelet[1841]: E0213 04:12:43.685061 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:44.686008 kubelet[1841]: E0213 04:12:44.685945 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:45.686305 kubelet[1841]: E0213 04:12:45.686187 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:46.686469 kubelet[1841]: E0213 04:12:46.686344 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:47.662353 kubelet[1841]: E0213 04:12:47.662233 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:47.686974 kubelet[1841]: E0213 04:12:47.686856 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:48.687685 kubelet[1841]: E0213 04:12:48.687567 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:49.688520 kubelet[1841]: E0213 04:12:49.688400 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:50.689149 kubelet[1841]: E0213 04:12:50.689041 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:51.690229 kubelet[1841]: E0213 04:12:51.690121 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:52.690984 kubelet[1841]: E0213 04:12:52.690920 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:53.691916 kubelet[1841]: E0213 04:12:53.691793 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:54.692818 kubelet[1841]: E0213 04:12:54.692699 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:55.693830 kubelet[1841]: E0213 04:12:55.693711 1841 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:56.694038 kubelet[1841]: E0213 04:12:56.693952 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:57.694407 kubelet[1841]: E0213 04:12:57.694332 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:58.695051 kubelet[1841]: E0213 04:12:58.694977 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:12:59.695300 kubelet[1841]: E0213 04:12:59.695227 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:00.695791 kubelet[1841]: E0213 04:13:00.695719 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:01.696605 kubelet[1841]: E0213 04:13:01.696495 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:02.696762 kubelet[1841]: E0213 04:13:02.696652 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:03.697531 kubelet[1841]: E0213 04:13:03.697414 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:04.698616 kubelet[1841]: E0213 04:13:04.698504 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:05.699177 kubelet[1841]: E0213 04:13:05.699066 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:06.700326 kubelet[1841]: E0213 04:13:06.700247 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:07.661928 kubelet[1841]: E0213 04:13:07.661834 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:07.700527 kubelet[1841]: E0213 04:13:07.700406 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:08.701579 kubelet[1841]: E0213 04:13:08.701460 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:09.702359 kubelet[1841]: E0213 04:13:09.702250 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:10.702542 kubelet[1841]: E0213 04:13:10.702407 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:11.703669 kubelet[1841]: E0213 04:13:11.703562 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:12.704109 kubelet[1841]: E0213 04:13:12.704027 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:13.704348 kubelet[1841]: E0213 04:13:13.704213 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:14.705608 kubelet[1841]: E0213 
04:13:14.705492 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:15.706550 kubelet[1841]: E0213 04:13:15.706433 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:16.706976 kubelet[1841]: E0213 04:13:16.706871 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:17.707735 kubelet[1841]: E0213 04:13:17.707631 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:18.708971 kubelet[1841]: E0213 04:13:18.708851 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:19.709591 kubelet[1841]: E0213 04:13:19.709481 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:20.709856 kubelet[1841]: E0213 04:13:20.709732 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:21.710447 kubelet[1841]: E0213 04:13:21.710324 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:22.710916 kubelet[1841]: E0213 04:13:22.710796 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:23.711772 kubelet[1841]: E0213 04:13:23.711699 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:24.712813 kubelet[1841]: E0213 04:13:24.712710 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:25.713796 kubelet[1841]: E0213 04:13:25.713692 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:26.714821 kubelet[1841]: E0213 04:13:26.714717 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:27.662060 kubelet[1841]: E0213 04:13:27.661958 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:27.715925 kubelet[1841]: E0213 04:13:27.715867 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:28.716944 kubelet[1841]: E0213 04:13:28.716877 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:29.717724 kubelet[1841]: E0213 04:13:29.717619 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:30.717972 kubelet[1841]: E0213 04:13:30.717871 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:31.718185 kubelet[1841]: E0213 04:13:31.718068 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:32.719124 kubelet[1841]: E0213 04:13:32.719030 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 04:13:33.719622 kubelet[1841]: E0213 04:13:33.719517 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:34.720273 kubelet[1841]: E0213 04:13:34.720204 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:35.720821 kubelet[1841]: E0213 04:13:35.720721 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:36.721220 kubelet[1841]: E0213 04:13:36.721154 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:37.721607 kubelet[1841]: E0213 04:13:37.721539 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:38.265298 systemd[1]: Started sshd@7-139.178.94.233:22-141.98.11.169:37294.service. Feb 13 04:13:38.722771 kubelet[1841]: E0213 04:13:38.722658 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:39.416529 sshd[3107]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:13:39.723645 kubelet[1841]: E0213 04:13:39.723539 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:40.724361 kubelet[1841]: E0213 04:13:40.724295 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:41.690703 sshd[3107]: Failed password for root from 141.98.11.169 port 37294 ssh2 Feb 13 04:13:41.724918 kubelet[1841]: E0213 04:13:41.724842 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:41.921673 sshd[3107]: Connection closed by authenticating user root 141.98.11.169 port 37294 [preauth] Feb 13 04:13:41.924164 systemd[1]: sshd@7-139.178.94.233:22-141.98.11.169:37294.service: Deactivated successfully. Feb 13 04:13:42.094326 systemd[1]: Started sshd@8-139.178.94.233:22-141.98.11.169:52574.service. Feb 13 04:13:42.725286 kubelet[1841]: E0213 04:13:42.725183 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:42.953746 sshd[3111]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:13:43.725809 kubelet[1841]: E0213 04:13:43.725698 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:44.726056 kubelet[1841]: E0213 04:13:44.725949 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:44.972102 sshd[3111]: Failed password for root from 141.98.11.169 port 52574 ssh2 Feb 13 04:13:45.461394 sshd[3111]: Connection closed by authenticating user root 141.98.11.169 port 52574 [preauth] Feb 13 04:13:45.463936 systemd[1]: sshd@8-139.178.94.233:22-141.98.11.169:52574.service: Deactivated successfully. Feb 13 04:13:45.633258 systemd[1]: Started sshd@9-139.178.94.233:22-141.98.11.169:33288.service. 
Feb 13 04:13:45.726513 kubelet[1841]: E0213 04:13:45.726309 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:46.675330 sshd[3117]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:13:46.727533 kubelet[1841]: E0213 04:13:46.727462 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:47.661466 kubelet[1841]: E0213 04:13:47.661351 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:47.728121 kubelet[1841]: E0213 04:13:47.728018 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:48.729252 kubelet[1841]: E0213 04:13:48.729147 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:48.909751 sshd[3117]: Failed password for root from 141.98.11.169 port 33288 ssh2 Feb 13 04:13:49.179659 sshd[3117]: Connection closed by authenticating user root 141.98.11.169 port 33288 [preauth] Feb 13 04:13:49.182084 systemd[1]: sshd@9-139.178.94.233:22-141.98.11.169:33288.service: Deactivated successfully. Feb 13 04:13:49.353011 systemd[1]: Started sshd@10-139.178.94.233:22-141.98.11.169:46196.service. Feb 13 04:13:49.729461 kubelet[1841]: E0213 04:13:49.729284 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:50.216517 sshd[3121]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:13:50.729845 kubelet[1841]: E0213 04:13:50.729727 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:51.730538 kubelet[1841]: E0213 04:13:51.730447 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:51.999350 sshd[3121]: Failed password for root from 141.98.11.169 port 46196 ssh2 Feb 13 04:13:52.723282 sshd[3121]: Connection closed by authenticating user root 141.98.11.169 port 46196 [preauth] Feb 13 04:13:52.725863 systemd[1]: sshd@10-139.178.94.233:22-141.98.11.169:46196.service: Deactivated successfully. Feb 13 04:13:52.731099 kubelet[1841]: E0213 04:13:52.731043 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:52.894446 systemd[1]: Started sshd@11-139.178.94.233:22-141.98.11.169:35746.service. 
Feb 13 04:13:53.731695 kubelet[1841]: E0213 04:13:53.731594 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:53.904918 sshd[3125]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:13:53.905164 sshd[3125]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 13 04:13:54.732847 kubelet[1841]: E0213 04:13:54.732735 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:55.733137 kubelet[1841]: E0213 04:13:55.733026 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:56.733905 kubelet[1841]: E0213 04:13:56.733795 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:56.766611 sshd[3125]: Failed password for root from 141.98.11.169 port 35746 ssh2 Feb 13 04:13:57.734824 kubelet[1841]: E0213 04:13:57.734713 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:58.735797 kubelet[1841]: E0213 04:13:58.735686 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:58.749000 sshd[3125]: Connection closed by authenticating user root 141.98.11.169 port 35746 [preauth] Feb 13 04:13:58.751522 systemd[1]: sshd@11-139.178.94.233:22-141.98.11.169:35746.service: Deactivated successfully. Feb 13 04:13:58.920553 systemd[1]: Started sshd@12-139.178.94.233:22-141.98.11.169:37714.service. Feb 13 04:13:59.736161 kubelet[1841]: E0213 04:13:59.736054 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:13:59.870961 sshd[3129]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:00.736620 kubelet[1841]: E0213 04:14:00.736490 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:01.737836 kubelet[1841]: E0213 04:14:01.737726 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:01.889343 sshd[3129]: Failed password for root from 141.98.11.169 port 37714 ssh2 Feb 13 04:14:02.376584 sshd[3129]: Connection closed by authenticating user root 141.98.11.169 port 37714 [preauth] Feb 13 04:14:02.379126 systemd[1]: sshd@12-139.178.94.233:22-141.98.11.169:37714.service: Deactivated successfully. Feb 13 04:14:02.547023 systemd[1]: Started sshd@13-139.178.94.233:22-141.98.11.169:56418.service. 
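Interleaved with the kubelet messages, sshd is servicing a steady stream of root-login attempts from 141.98.11.169: each cycle starts a per-connection sshd@N unit, records a pam_unix authentication failure and a failed password, then the client disconnects preauth and the unit is deactivated; by 04:13:53 pam_faillock additionally reports that the consecutive failures have temporarily locked the root account. A small sketch for quantifying the pattern, assuming the journal has been saved to a text file (journal.log is a placeholder name):

```python
#!/usr/bin/env python3
"""Minimal sketch: tally failed SSH password attempts per source address in a
journal dump like the one above. The message layout is taken from the entries
shown here; finditer() is used because, as in this dump, several journal
entries can share one physical line."""
import re
import sys
from collections import Counter

FAILED = re.compile(
    r"Failed password for (?:invalid user )?(?P<user>\S+) "
    r"from (?P<ip>[\d.]+) port (?P<port>\d+) ssh2"
)

def tally(text: str) -> Counter:
    counts = Counter()
    for m in FAILED.finditer(text):
        counts[(m.group("ip"), m.group("user"))] += 1
    return counts

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "journal.log"
    with open(path, encoding="utf-8") as fh:
        for (ip, user), n in tally(fh.read()).most_common():
            print(f"{n:4d} failed password attempts for {user!r} from {ip}")
```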
Feb 13 04:14:02.738343 kubelet[1841]: E0213 04:14:02.738229 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:03.445002 sshd[3133]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:03.739442 kubelet[1841]: E0213 04:14:03.739303 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:04.740537 kubelet[1841]: E0213 04:14:04.740474 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:05.679473 sshd[3133]: Failed password for root from 141.98.11.169 port 56418 ssh2 Feb 13 04:14:05.741449 kubelet[1841]: E0213 04:14:05.741316 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:05.951295 sshd[3133]: Connection closed by authenticating user root 141.98.11.169 port 56418 [preauth] Feb 13 04:14:05.953814 systemd[1]: sshd@13-139.178.94.233:22-141.98.11.169:56418.service: Deactivated successfully. Feb 13 04:14:06.135760 systemd[1]: Started sshd@14-139.178.94.233:22-141.98.11.169:33718.service. Feb 13 04:14:06.741719 kubelet[1841]: E0213 04:14:06.741611 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:06.974387 sshd[3137]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:07.661361 kubelet[1841]: E0213 04:14:07.661283 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:07.742193 kubelet[1841]: E0213 04:14:07.742092 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:08.743087 kubelet[1841]: E0213 04:14:08.743016 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:08.953165 sshd[3137]: Failed password for root from 141.98.11.169 port 33718 ssh2 Feb 13 04:14:09.477015 sshd[3137]: Connection closed by authenticating user root 141.98.11.169 port 33718 [preauth] Feb 13 04:14:09.479537 systemd[1]: sshd@14-139.178.94.233:22-141.98.11.169:33718.service: Deactivated successfully. Feb 13 04:14:09.648299 systemd[1]: Started sshd@15-139.178.94.233:22-141.98.11.169:47324.service. 
Feb 13 04:14:09.743698 kubelet[1841]: E0213 04:14:09.743517 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:10.724995 sshd[3143]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:10.744789 kubelet[1841]: E0213 04:14:10.744680 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:11.745838 kubelet[1841]: E0213 04:14:11.745725 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:12.587775 sshd[3143]: Failed password for root from 141.98.11.169 port 47324 ssh2 Feb 13 04:14:12.746114 kubelet[1841]: E0213 04:14:12.745996 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:13.291898 sshd[3143]: Connection closed by authenticating user root 141.98.11.169 port 47324 [preauth] Feb 13 04:14:13.294369 systemd[1]: sshd@15-139.178.94.233:22-141.98.11.169:47324.service: Deactivated successfully. Feb 13 04:14:13.474882 systemd[1]: Started sshd@16-139.178.94.233:22-141.98.11.169:54902.service. Feb 13 04:14:13.746460 kubelet[1841]: E0213 04:14:13.746366 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:14.638836 sshd[3149]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:14.746707 kubelet[1841]: E0213 04:14:14.746632 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:15.747630 kubelet[1841]: E0213 04:14:15.747509 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:16.748287 kubelet[1841]: E0213 04:14:16.748179 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:17.049169 sshd[3149]: Failed password for root from 141.98.11.169 port 54902 ssh2 Feb 13 04:14:17.749454 kubelet[1841]: E0213 04:14:17.749328 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:18.749559 kubelet[1841]: E0213 04:14:18.749489 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:19.491549 sshd[3149]: Connection closed by authenticating user root 141.98.11.169 port 54902 [preauth] Feb 13 04:14:19.494053 systemd[1]: sshd@16-139.178.94.233:22-141.98.11.169:54902.service: Deactivated successfully. Feb 13 04:14:19.664087 systemd[1]: Started sshd@17-139.178.94.233:22-141.98.11.169:53638.service. 
Feb 13 04:14:19.750228 kubelet[1841]: E0213 04:14:19.750041 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:20.698237 sshd[3153]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:20.750372 kubelet[1841]: E0213 04:14:20.750306 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:21.750775 kubelet[1841]: E0213 04:14:21.750676 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:22.601119 sshd[3153]: Failed password for root from 141.98.11.169 port 53638 ssh2 Feb 13 04:14:22.751923 kubelet[1841]: E0213 04:14:22.751849 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:23.206567 sshd[3153]: Connection closed by authenticating user root 141.98.11.169 port 53638 [preauth] Feb 13 04:14:23.209212 systemd[1]: sshd@17-139.178.94.233:22-141.98.11.169:53638.service: Deactivated successfully. Feb 13 04:14:23.392707 systemd[1]: Started sshd@18-139.178.94.233:22-141.98.11.169:42032.service. Feb 13 04:14:23.752282 kubelet[1841]: E0213 04:14:23.752226 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:24.230982 sshd[3157]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:24.753232 kubelet[1841]: E0213 04:14:24.753125 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:25.346362 sshd[3157]: Failed password for root from 141.98.11.169 port 42032 ssh2 Feb 13 04:14:25.564187 sshd[3157]: Connection closed by authenticating user root 141.98.11.169 port 42032 [preauth] Feb 13 04:14:25.566700 systemd[1]: sshd@18-139.178.94.233:22-141.98.11.169:42032.service: Deactivated successfully. Feb 13 04:14:25.742981 systemd[1]: Started sshd@19-139.178.94.233:22-141.98.11.169:32846.service. 
Feb 13 04:14:25.753539 kubelet[1841]: E0213 04:14:25.753525 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:26.627297 sshd[3162]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:26.754375 kubelet[1841]: E0213 04:14:26.754268 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:27.662275 kubelet[1841]: E0213 04:14:27.662172 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:27.755172 kubelet[1841]: E0213 04:14:27.755072 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:28.686076 sshd[3162]: Failed password for root from 141.98.11.169 port 32846 ssh2 Feb 13 04:14:28.756310 kubelet[1841]: E0213 04:14:28.756197 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:29.139074 sshd[3162]: Connection closed by authenticating user root 141.98.11.169 port 32846 [preauth] Feb 13 04:14:29.141627 systemd[1]: sshd@19-139.178.94.233:22-141.98.11.169:32846.service: Deactivated successfully. Feb 13 04:14:29.311649 systemd[1]: Started sshd@20-139.178.94.233:22-141.98.11.169:45546.service. Feb 13 04:14:29.756572 kubelet[1841]: E0213 04:14:29.756494 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:30.155721 sshd[3166]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:30.756860 kubelet[1841]: E0213 04:14:30.756747 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:31.757894 kubelet[1841]: E0213 04:14:31.757786 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:31.762671 sshd[3166]: Failed password for root from 141.98.11.169 port 45546 ssh2 Feb 13 04:14:32.734123 sshd[3166]: Connection closed by authenticating user root 141.98.11.169 port 45546 [preauth] Feb 13 04:14:32.736601 systemd[1]: sshd@20-139.178.94.233:22-141.98.11.169:45546.service: Deactivated successfully. Feb 13 04:14:32.758443 kubelet[1841]: E0213 04:14:32.758348 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:32.912823 systemd[1]: Started sshd@21-139.178.94.233:22-141.98.11.169:44090.service. 
Feb 13 04:14:33.744455 sshd[3170]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:33.758632 kubelet[1841]: E0213 04:14:33.758568 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:34.759360 kubelet[1841]: E0213 04:14:34.759254 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:35.431618 sshd[3170]: Failed password for root from 141.98.11.169 port 44090 ssh2 Feb 13 04:14:35.760381 kubelet[1841]: E0213 04:14:35.760307 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:36.253468 sshd[3170]: Connection closed by authenticating user root 141.98.11.169 port 44090 [preauth] Feb 13 04:14:36.256008 systemd[1]: sshd@21-139.178.94.233:22-141.98.11.169:44090.service: Deactivated successfully. Feb 13 04:14:36.425017 systemd[1]: Started sshd@22-139.178.94.233:22-141.98.11.169:55802.service. Feb 13 04:14:36.761367 kubelet[1841]: E0213 04:14:36.761267 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:37.358094 sshd[3174]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:37.762333 kubelet[1841]: E0213 04:14:37.762262 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:38.762970 kubelet[1841]: E0213 04:14:38.762855 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:39.592692 sshd[3174]: Failed password for root from 141.98.11.169 port 55802 ssh2 Feb 13 04:14:39.763954 kubelet[1841]: E0213 04:14:39.763845 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:39.905847 sshd[3174]: Connection closed by authenticating user root 141.98.11.169 port 55802 [preauth] Feb 13 04:14:39.908210 systemd[1]: sshd@22-139.178.94.233:22-141.98.11.169:55802.service: Deactivated successfully. Feb 13 04:14:40.093654 systemd[1]: Started sshd@23-139.178.94.233:22-141.98.11.169:37194.service. 
Feb 13 04:14:40.764110 kubelet[1841]: E0213 04:14:40.764018 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:40.960286 sshd[3178]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:41.765231 kubelet[1841]: E0213 04:14:41.765122 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:42.765383 kubelet[1841]: E0213 04:14:42.765304 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:43.606398 sshd[3178]: Failed password for root from 141.98.11.169 port 37194 ssh2 Feb 13 04:14:43.765956 kubelet[1841]: E0213 04:14:43.765842 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:44.766886 kubelet[1841]: E0213 04:14:44.766771 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:45.768118 kubelet[1841]: E0213 04:14:45.768011 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:45.810828 sshd[3178]: Connection closed by authenticating user root 141.98.11.169 port 37194 [preauth] Feb 13 04:14:45.813332 systemd[1]: sshd@23-139.178.94.233:22-141.98.11.169:37194.service: Deactivated successfully. Feb 13 04:14:45.980744 systemd[1]: Started sshd@24-139.178.94.233:22-141.98.11.169:37622.service. Feb 13 04:14:46.768701 kubelet[1841]: E0213 04:14:46.768630 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:46.846503 sshd[3184]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:47.662021 kubelet[1841]: E0213 04:14:47.661939 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:47.769648 kubelet[1841]: E0213 04:14:47.769534 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:48.649595 sshd[3184]: Failed password for root from 141.98.11.169 port 37622 ssh2 Feb 13 04:14:48.770263 kubelet[1841]: E0213 04:14:48.770154 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:49.349553 sshd[3184]: Connection closed by authenticating user root 141.98.11.169 port 37622 [preauth] Feb 13 04:14:49.352197 systemd[1]: sshd@24-139.178.94.233:22-141.98.11.169:37622.service: Deactivated successfully. Feb 13 04:14:49.519680 systemd[1]: Started sshd@25-139.178.94.233:22-141.98.11.169:47420.service. 
Feb 13 04:14:49.771387 kubelet[1841]: E0213 04:14:49.771256 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:50.472569 sshd[3188]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.169 user=root Feb 13 04:14:50.772474 kubelet[1841]: E0213 04:14:50.772235 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:51.772665 kubelet[1841]: E0213 04:14:51.772557 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:52.491246 sshd[3188]: Failed password for root from 141.98.11.169 port 47420 ssh2 Feb 13 04:14:52.773768 kubelet[1841]: E0213 04:14:52.773551 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:53.774334 kubelet[1841]: E0213 04:14:53.774230 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:54.775275 kubelet[1841]: E0213 04:14:54.775210 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:55.775601 kubelet[1841]: E0213 04:14:55.775495 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:56.775983 kubelet[1841]: E0213 04:14:56.775869 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:57.777008 kubelet[1841]: E0213 04:14:57.776878 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:58.777220 kubelet[1841]: E0213 04:14:58.777107 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:14:59.777851 kubelet[1841]: E0213 04:14:59.777737 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:00.778853 kubelet[1841]: E0213 04:15:00.778738 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:01.778999 kubelet[1841]: E0213 04:15:01.778918 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:02.779807 kubelet[1841]: E0213 04:15:02.779695 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:03.780673 kubelet[1841]: E0213 04:15:03.780573 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:04.781898 kubelet[1841]: E0213 04:15:04.781777 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:05.782821 kubelet[1841]: E0213 04:15:05.782711 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:06.783678 kubelet[1841]: E0213 04:15:06.783600 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:07.661379 kubelet[1841]: E0213 
04:15:07.661271 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:07.784070 kubelet[1841]: E0213 04:15:07.783997 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:08.784712 kubelet[1841]: E0213 04:15:08.784641 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:09.784948 kubelet[1841]: E0213 04:15:09.784873 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:10.785709 kubelet[1841]: E0213 04:15:10.785602 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:11.785964 kubelet[1841]: E0213 04:15:11.785856 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:12.786951 kubelet[1841]: E0213 04:15:12.786842 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:13.787375 kubelet[1841]: E0213 04:15:13.787270 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:14.788308 kubelet[1841]: E0213 04:15:14.788185 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:15.789179 kubelet[1841]: E0213 04:15:15.789063 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:16.790347 kubelet[1841]: E0213 04:15:16.790244 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:17.791406 kubelet[1841]: E0213 04:15:17.791297 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:18.792365 kubelet[1841]: E0213 04:15:18.792255 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:19.793246 kubelet[1841]: E0213 04:15:19.793132 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:20.794059 kubelet[1841]: E0213 04:15:20.793990 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:21.794880 kubelet[1841]: E0213 04:15:21.794776 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:22.795884 kubelet[1841]: E0213 04:15:22.795779 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:23.796073 kubelet[1841]: E0213 04:15:23.795990 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:24.796958 kubelet[1841]: E0213 04:15:24.796848 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:25.797608 kubelet[1841]: E0213 04:15:25.797502 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 04:15:26.798455 kubelet[1841]: E0213 04:15:26.798328 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:27.068303 kubelet[1841]: I0213 04:15:27.068094 1841 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:15:27.082144 systemd[1]: Created slice kubepods-besteffort-pod0fa2eb9b_ccf6_4028_b6f6_ce862712ed7d.slice. Feb 13 04:15:27.106679 kubelet[1841]: I0213 04:15:27.106624 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nntp\" (UniqueName: \"kubernetes.io/projected/0fa2eb9b-ccf6-4028-b6f6-ce862712ed7d-kube-api-access-7nntp\") pod \"nfs-server-provisioner-0\" (UID: \"0fa2eb9b-ccf6-4028-b6f6-ce862712ed7d\") " pod="default/nfs-server-provisioner-0" Feb 13 04:15:27.106978 kubelet[1841]: I0213 04:15:27.106732 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0fa2eb9b-ccf6-4028-b6f6-ce862712ed7d-data\") pod \"nfs-server-provisioner-0\" (UID: \"0fa2eb9b-ccf6-4028-b6f6-ce862712ed7d\") " pod="default/nfs-server-provisioner-0" Feb 13 04:15:27.388962 env[1466]: time="2024-02-13T04:15:27.388709851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0fa2eb9b-ccf6-4028-b6f6-ce862712ed7d,Namespace:default,Attempt:0,}" Feb 13 04:15:27.437804 systemd-networkd[1309]: lxc85f08499e0cf: Link UP Feb 13 04:15:27.463486 kernel: eth0: renamed from tmp3a9fa Feb 13 04:15:27.493052 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 04:15:27.493125 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc85f08499e0cf: link becomes ready Feb 13 04:15:27.493130 systemd-networkd[1309]: lxc85f08499e0cf: Gained carrier Feb 13 04:15:27.661808 kubelet[1841]: E0213 04:15:27.661669 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:27.761109 env[1466]: time="2024-02-13T04:15:27.761055050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 04:15:27.761109 env[1466]: time="2024-02-13T04:15:27.761075750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 04:15:27.761109 env[1466]: time="2024-02-13T04:15:27.761082412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 04:15:27.761307 env[1466]: time="2024-02-13T04:15:27.761216605Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a9fa7d0c988a01b461b5135392c176e22ba5dfd9d110ff71d3c723d604af348 pid=3310 runtime=io.containerd.runc.v2 Feb 13 04:15:27.767394 systemd[1]: Started cri-containerd-3a9fa7d0c988a01b461b5135392c176e22ba5dfd9d110ff71d3c723d604af348.scope. 
Feb 13 04:15:27.789998 env[1466]: time="2024-02-13T04:15:27.789971169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0fa2eb9b-ccf6-4028-b6f6-ce862712ed7d,Namespace:default,Attempt:0,} returns sandbox id \"3a9fa7d0c988a01b461b5135392c176e22ba5dfd9d110ff71d3c723d604af348\"" Feb 13 04:15:27.798503 kubelet[1841]: E0213 04:15:27.798425 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:28.798629 kubelet[1841]: E0213 04:15:28.798553 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:28.841818 systemd-networkd[1309]: lxc85f08499e0cf: Gained IPv6LL Feb 13 04:15:29.799180 kubelet[1841]: E0213 04:15:29.799099 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:30.799457 kubelet[1841]: E0213 04:15:30.799345 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:31.800175 kubelet[1841]: E0213 04:15:31.800065 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:32.800870 kubelet[1841]: E0213 04:15:32.800745 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:33.801296 kubelet[1841]: E0213 04:15:33.801193 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:34.610601 systemd[1]: Started sshd@26-139.178.94.233:22-138.197.18.220:60788.service. Feb 13 04:15:34.801406 kubelet[1841]: E0213 04:15:34.801356 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:34.995862 sshd[3342]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=138.197.18.220 user=root Feb 13 04:15:35.802354 kubelet[1841]: E0213 04:15:35.802237 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:36.802935 kubelet[1841]: E0213 04:15:36.802824 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:37.054756 sshd[3342]: Failed password for root from 138.197.18.220 port 60788 ssh2 Feb 13 04:15:37.405572 sshd[3342]: Connection closed by authenticating user root 138.197.18.220 port 60788 [preauth] Feb 13 04:15:37.407924 systemd[1]: sshd@26-139.178.94.233:22-138.197.18.220:60788.service: Deactivated successfully. 
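Buried in the SSH noise, 04:15:27 records a pod actually starting: the topology manager admits it, systemd creates the kubepods-besteffort slice for UID 0fa2eb9b-ccf6-4028-b6f6-ce862712ed7d, the kubelet attaches a projected service-account token volume (kube-api-access-7nntp) and an emptyDir volume (data), containerd's RunPodSandbox for default/nfs-server-provisioner-0 returns the 64-character sandbox id, and systemd starts the matching cri-containerd-<id>.scope unit. A sketch for pulling that pod-to-sandbox mapping out of the dump (journal.log is a placeholder name):

```python
#!/usr/bin/env python3
"""Minimal sketch: extract the pod-to-sandbox mapping from containerd's
"RunPodSandbox ... returns sandbox id" entries, as logged above for
nfs-server-provisioner-0. The escaped quotes around the id match how the
entries appear in this dump."""
import re

SANDBOX = re.compile(
    r"RunPodSandbox for &PodSandboxMetadata\{Name:(?P<name>[^,]+),"
    r"Uid:(?P<uid>[^,]+),Namespace:(?P<ns>[^,]+),Attempt:(?P<attempt>\d+),\}"
    r" returns sandbox id \\\"(?P<sid>[0-9a-f]{64})\\\""
)

def sandboxes(text: str):
    for m in SANDBOX.finditer(text):
        yield {
            "pod": f'{m.group("ns")}/{m.group("name")}',
            "uid": m.group("uid"),
            "sandbox": m.group("sid"),
            # systemd wraps the sandbox in a transient scope of this name,
            # the cri-containerd-<id>.scope unit seen in the log above
            "scope": f'cri-containerd-{m.group("sid")}.scope',
        }

if __name__ == "__main__":
    with open("journal.log", encoding="utf-8") as fh:
        for entry in sandboxes(fh.read()):
            for key, value in entry.items():
                print(f"{key:8s} {value}")
```

The leading characters of that sandbox id also match the tmp3a9fa prefix in the kernel's "eth0: renamed from tmp3a9fa" message, tying the network-interface rename to the same pod sandbox.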
Feb 13 04:15:37.804105 kubelet[1841]: E0213 04:15:37.803992 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:38.804784 kubelet[1841]: E0213 04:15:38.804681 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:39.805024 kubelet[1841]: E0213 04:15:39.804920 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:40.805799 kubelet[1841]: E0213 04:15:40.805698 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:41.806025 kubelet[1841]: E0213 04:15:41.805910 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:42.807155 kubelet[1841]: E0213 04:15:42.807045 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:43.807946 kubelet[1841]: E0213 04:15:43.807834 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:44.808683 kubelet[1841]: E0213 04:15:44.808567 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:45.809706 kubelet[1841]: E0213 04:15:45.809598 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:46.810348 kubelet[1841]: E0213 04:15:46.810246 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:47.661436 kubelet[1841]: E0213 04:15:47.661321 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:47.811459 kubelet[1841]: E0213 04:15:47.811332 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:48.812050 kubelet[1841]: E0213 04:15:48.811940 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:49.812980 kubelet[1841]: E0213 04:15:49.812903 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:50.813565 kubelet[1841]: E0213 04:15:50.813445 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:51.814457 kubelet[1841]: E0213 04:15:51.814328 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:52.815458 kubelet[1841]: E0213 04:15:52.815343 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:53.816347 kubelet[1841]: E0213 04:15:53.816270 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:54.817234 kubelet[1841]: E0213 04:15:54.817160 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:55.817399 kubelet[1841]: E0213 04:15:55.817328 1841 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:56.817774 kubelet[1841]: E0213 04:15:56.817702 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:57.817979 kubelet[1841]: E0213 04:15:57.817868 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:58.818667 kubelet[1841]: E0213 04:15:58.818590 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:15:59.819482 kubelet[1841]: E0213 04:15:59.819392 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:00.820674 kubelet[1841]: E0213 04:16:00.820606 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:01.820986 kubelet[1841]: E0213 04:16:01.820880 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:02.821878 kubelet[1841]: E0213 04:16:02.821774 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:03.822966 kubelet[1841]: E0213 04:16:03.822855 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:04.823247 kubelet[1841]: E0213 04:16:04.823137 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:05.823546 kubelet[1841]: E0213 04:16:05.823412 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:06.824044 kubelet[1841]: E0213 04:16:06.823933 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:07.662376 kubelet[1841]: E0213 04:16:07.662306 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:07.824677 kubelet[1841]: E0213 04:16:07.824600 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:08.824816 kubelet[1841]: E0213 04:16:08.824741 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:09.825262 kubelet[1841]: E0213 04:16:09.825188 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:10.825758 kubelet[1841]: E0213 04:16:10.825644 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:11.826767 kubelet[1841]: E0213 04:16:11.826665 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:12.827128 kubelet[1841]: E0213 04:16:12.827049 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:13.827346 kubelet[1841]: E0213 04:16:13.827242 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:14.827635 kubelet[1841]: E0213 
04:16:14.827530 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:15.828479 kubelet[1841]: E0213 04:16:15.828390 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:16.829166 kubelet[1841]: E0213 04:16:16.829090 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:17.829444 kubelet[1841]: E0213 04:16:17.829329 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:18.829918 kubelet[1841]: E0213 04:16:18.829841 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:19.830846 kubelet[1841]: E0213 04:16:19.830775 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:20.831682 kubelet[1841]: E0213 04:16:20.831606 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:21.831832 kubelet[1841]: E0213 04:16:21.831758 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:22.832695 kubelet[1841]: E0213 04:16:22.832586 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:23.833724 kubelet[1841]: E0213 04:16:23.833615 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:24.833957 kubelet[1841]: E0213 04:16:24.833886 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:25.834574 kubelet[1841]: E0213 04:16:25.834460 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:26.834809 kubelet[1841]: E0213 04:16:26.834708 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:27.661618 kubelet[1841]: E0213 04:16:27.661510 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:27.835019 kubelet[1841]: E0213 04:16:27.834907 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:28.836335 kubelet[1841]: E0213 04:16:28.836232 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:29.837386 kubelet[1841]: E0213 04:16:29.837284 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:30.837651 kubelet[1841]: E0213 04:16:30.837543 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:31.838772 kubelet[1841]: E0213 04:16:31.838657 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:32.839278 kubelet[1841]: E0213 04:16:32.839169 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 04:16:33.839766 kubelet[1841]: E0213 04:16:33.839657 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:34.839983 kubelet[1841]: E0213 04:16:34.839869 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:35.840104 kubelet[1841]: E0213 04:16:35.840015 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:36.841262 kubelet[1841]: E0213 04:16:36.841146 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:37.841453 kubelet[1841]: E0213 04:16:37.841376 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:38.842545 kubelet[1841]: E0213 04:16:38.842438 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:39.842675 kubelet[1841]: E0213 04:16:39.842620 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:40.843256 kubelet[1841]: E0213 04:16:40.843148 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:41.843672 kubelet[1841]: E0213 04:16:41.843562 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:42.844447 kubelet[1841]: E0213 04:16:42.844307 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:43.845458 kubelet[1841]: E0213 04:16:43.845350 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:44.845806 kubelet[1841]: E0213 04:16:44.845686 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:45.846880 kubelet[1841]: E0213 04:16:45.846765 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:46.847552 kubelet[1841]: E0213 04:16:46.847442 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:47.661805 kubelet[1841]: E0213 04:16:47.661688 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:47.848696 kubelet[1841]: E0213 04:16:47.848586 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:48.849835 kubelet[1841]: E0213 04:16:48.849722 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:49.472769 sshd[3188]: Timeout before authentication for 141.98.11.169 port 47420 Feb 13 04:16:49.474642 systemd[1]: sshd@25-139.178.94.233:22-141.98.11.169:47420.service: Deactivated successfully. 
Feb 13 04:16:49.850109 kubelet[1841]: E0213 04:16:49.849895 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:50.850190 kubelet[1841]: E0213 04:16:50.850084 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:51.851400 kubelet[1841]: E0213 04:16:51.851307 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:52.852437 kubelet[1841]: E0213 04:16:52.852302 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:53.853109 kubelet[1841]: E0213 04:16:53.853020 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:54.854170 kubelet[1841]: E0213 04:16:54.854061 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:55.855241 kubelet[1841]: E0213 04:16:55.855128 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:56.855692 kubelet[1841]: E0213 04:16:56.855588 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:57.856563 kubelet[1841]: E0213 04:16:57.856448 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:58.857314 kubelet[1841]: E0213 04:16:58.857203 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:16:59.857532 kubelet[1841]: E0213 04:16:59.857455 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:00.858702 kubelet[1841]: E0213 04:17:00.858597 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:01.859212 kubelet[1841]: E0213 04:17:01.859117 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:02.860437 kubelet[1841]: E0213 04:17:02.860319 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:03.861168 kubelet[1841]: E0213 04:17:03.861088 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:04.861836 kubelet[1841]: E0213 04:17:04.861761 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:05.862317 kubelet[1841]: E0213 04:17:05.862241 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:06.863312 kubelet[1841]: E0213 04:17:06.863229 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:07.662085 kubelet[1841]: E0213 04:17:07.662002 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:07.863836 kubelet[1841]: E0213 04:17:07.863729 1841 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:08.865040 kubelet[1841]: E0213 04:17:08.864928 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:09.866308 kubelet[1841]: E0213 04:17:09.866191 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:10.867281 kubelet[1841]: E0213 04:17:10.867173 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:11.867647 kubelet[1841]: E0213 04:17:11.867531 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:12.867787 kubelet[1841]: E0213 04:17:12.867671 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:13.868716 kubelet[1841]: E0213 04:17:13.868608 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:14.869949 kubelet[1841]: E0213 04:17:14.869843 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:15.870877 kubelet[1841]: E0213 04:17:15.870767 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:16.872088 kubelet[1841]: E0213 04:17:16.871977 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:17.873309 kubelet[1841]: E0213 04:17:17.873201 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:18.874231 kubelet[1841]: E0213 04:17:18.874115 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:19.874985 kubelet[1841]: E0213 04:17:19.874876 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:20.875495 kubelet[1841]: E0213 04:17:20.875374 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:21.876376 kubelet[1841]: E0213 04:17:21.876315 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:22.877615 kubelet[1841]: E0213 04:17:22.877505 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:23.878835 kubelet[1841]: E0213 04:17:23.878714 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:24.879004 kubelet[1841]: E0213 04:17:24.878895 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:25.879519 kubelet[1841]: E0213 04:17:25.879435 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:26.880825 kubelet[1841]: E0213 04:17:26.880714 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:27.661676 kubelet[1841]: E0213 
04:17:27.661560 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:27.881391 kubelet[1841]: E0213 04:17:27.881273 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:28.881640 kubelet[1841]: E0213 04:17:28.881532 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:29.882921 kubelet[1841]: E0213 04:17:29.882805 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:30.883696 kubelet[1841]: E0213 04:17:30.883578 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:31.884236 kubelet[1841]: E0213 04:17:31.884076 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:32.884702 kubelet[1841]: E0213 04:17:32.884597 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:33.885954 kubelet[1841]: E0213 04:17:33.885848 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:34.886702 kubelet[1841]: E0213 04:17:34.886599 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:35.887136 kubelet[1841]: E0213 04:17:35.887032 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:36.888228 kubelet[1841]: E0213 04:17:36.888118 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:37.889204 kubelet[1841]: E0213 04:17:37.889102 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:38.890277 kubelet[1841]: E0213 04:17:38.890168 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:39.890772 kubelet[1841]: E0213 04:17:39.890670 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:40.891769 kubelet[1841]: E0213 04:17:40.891697 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:41.892777 kubelet[1841]: E0213 04:17:41.892672 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:42.893366 kubelet[1841]: E0213 04:17:42.893263 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:43.893570 kubelet[1841]: E0213 04:17:43.893458 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:44.894238 kubelet[1841]: E0213 04:17:44.894135 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:45.895506 kubelet[1841]: E0213 04:17:45.895391 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 04:17:46.896393 kubelet[1841]: E0213 04:17:46.896282 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:47.661284 kubelet[1841]: E0213 04:17:47.661185 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:47.897645 kubelet[1841]: E0213 04:17:47.897544 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:48.898114 kubelet[1841]: E0213 04:17:48.898039 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:49.898918 kubelet[1841]: E0213 04:17:49.898805 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:50.899623 kubelet[1841]: E0213 04:17:50.899509 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:51.900529 kubelet[1841]: E0213 04:17:51.900411 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:52.901060 kubelet[1841]: E0213 04:17:52.900948 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:53.901867 kubelet[1841]: E0213 04:17:53.901754 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:54.902085 kubelet[1841]: E0213 04:17:54.901986 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:55.902222 kubelet[1841]: E0213 04:17:55.902115 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:56.902858 kubelet[1841]: E0213 04:17:56.902778 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:57.903472 kubelet[1841]: E0213 04:17:57.903362 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:58.904557 kubelet[1841]: E0213 04:17:58.904450 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:17:59.905286 kubelet[1841]: E0213 04:17:59.905178 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:00.906278 kubelet[1841]: E0213 04:18:00.906167 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:01.906739 kubelet[1841]: E0213 04:18:01.906627 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:02.907485 kubelet[1841]: E0213 04:18:02.907337 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:03.908058 kubelet[1841]: E0213 04:18:03.907949 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:04.908936 kubelet[1841]: E0213 04:18:04.908826 1841 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:05.909552 kubelet[1841]: E0213 04:18:05.909440 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:06.909807 kubelet[1841]: E0213 04:18:06.909697 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:07.661324 kubelet[1841]: E0213 04:18:07.661219 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:07.910000 kubelet[1841]: E0213 04:18:07.909926 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:08.910206 kubelet[1841]: E0213 04:18:08.910096 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:09.910866 kubelet[1841]: E0213 04:18:09.910789 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:10.911547 kubelet[1841]: E0213 04:18:10.911474 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:11.911791 kubelet[1841]: E0213 04:18:11.911711 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:12.912563 kubelet[1841]: E0213 04:18:12.912446 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:13.913138 kubelet[1841]: E0213 04:18:13.913032 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:14.914101 kubelet[1841]: E0213 04:18:14.913998 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:15.914331 kubelet[1841]: E0213 04:18:15.914259 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:16.915356 kubelet[1841]: E0213 04:18:16.915284 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:17.916409 kubelet[1841]: E0213 04:18:17.916330 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:18.917447 kubelet[1841]: E0213 04:18:18.917352 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:19.917656 kubelet[1841]: E0213 04:18:19.917546 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:20.917924 kubelet[1841]: E0213 04:18:20.917817 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:21.918590 kubelet[1841]: E0213 04:18:21.918484 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:22.919709 kubelet[1841]: E0213 04:18:22.919606 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:23.540635 
systemd[1]: Started sshd@27-139.178.94.233:22-196.189.21.247:38552.service. Feb 13 04:18:23.920889 kubelet[1841]: E0213 04:18:23.920659 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:24.795217 sshd[3375]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.189.21.247 user=root Feb 13 04:18:24.920915 kubelet[1841]: E0213 04:18:24.920799 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:25.921275 kubelet[1841]: E0213 04:18:25.921152 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:26.527707 sshd[3375]: Failed password for root from 196.189.21.247 port 38552 ssh2 Feb 13 04:18:26.921707 kubelet[1841]: E0213 04:18:26.921494 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:27.360289 sshd[3375]: Received disconnect from 196.189.21.247 port 38552:11: Bye Bye [preauth] Feb 13 04:18:27.360289 sshd[3375]: Disconnected from authenticating user root 196.189.21.247 port 38552 [preauth] Feb 13 04:18:27.362848 systemd[1]: sshd@27-139.178.94.233:22-196.189.21.247:38552.service: Deactivated successfully. Feb 13 04:18:27.661839 kubelet[1841]: E0213 04:18:27.661620 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:27.922760 kubelet[1841]: E0213 04:18:27.922534 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:28.667714 env[1466]: time="2024-02-13T04:18:28.667657964Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 04:18:28.670811 env[1466]: time="2024-02-13T04:18:28.670797372Z" level=info msg="StopContainer for \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\" with timeout 1 (s)" Feb 13 04:18:28.670989 env[1466]: time="2024-02-13T04:18:28.670975953Z" level=info msg="Stop container \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\" with signal terminated" Feb 13 04:18:28.674150 systemd-networkd[1309]: lxc_health: Link DOWN Feb 13 04:18:28.674152 systemd-networkd[1309]: lxc_health: Lost carrier Feb 13 04:18:28.727944 systemd[1]: cri-containerd-0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982.scope: Deactivated successfully. Feb 13 04:18:28.728297 systemd[1]: cri-containerd-0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982.scope: Consumed 6.248s CPU time. Feb 13 04:18:28.762126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982-rootfs.mount: Deactivated successfully. 
Feb 13 04:18:28.763027 env[1466]: time="2024-02-13T04:18:28.762978137Z" level=info msg="shim disconnected" id=0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982 Feb 13 04:18:28.763027 env[1466]: time="2024-02-13T04:18:28.763004816Z" level=warning msg="cleaning up after shim disconnected" id=0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982 namespace=k8s.io Feb 13 04:18:28.763027 env[1466]: time="2024-02-13T04:18:28.763010818Z" level=info msg="cleaning up dead shim" Feb 13 04:18:28.766253 env[1466]: time="2024-02-13T04:18:28.766236917Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3423 runtime=io.containerd.runc.v2\n" Feb 13 04:18:28.767041 env[1466]: time="2024-02-13T04:18:28.767028058Z" level=info msg="StopContainer for \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\" returns successfully" Feb 13 04:18:28.767351 env[1466]: time="2024-02-13T04:18:28.767339505Z" level=info msg="StopPodSandbox for \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\"" Feb 13 04:18:28.767379 env[1466]: time="2024-02-13T04:18:28.767370893Z" level=info msg="Container to stop \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 04:18:28.767405 env[1466]: time="2024-02-13T04:18:28.767380893Z" level=info msg="Container to stop \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 04:18:28.767405 env[1466]: time="2024-02-13T04:18:28.767387321Z" level=info msg="Container to stop \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 04:18:28.767405 env[1466]: time="2024-02-13T04:18:28.767394775Z" level=info msg="Container to stop \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 04:18:28.767405 env[1466]: time="2024-02-13T04:18:28.767400855Z" level=info msg="Container to stop \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 04:18:28.768223 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e-shm.mount: Deactivated successfully. Feb 13 04:18:28.770321 systemd[1]: cri-containerd-92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e.scope: Deactivated successfully. Feb 13 04:18:28.784846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e-rootfs.mount: Deactivated successfully. 
Feb 13 04:18:28.815122 env[1466]: time="2024-02-13T04:18:28.815032902Z" level=info msg="shim disconnected" id=92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e Feb 13 04:18:28.815122 env[1466]: time="2024-02-13T04:18:28.815100220Z" level=warning msg="cleaning up after shim disconnected" id=92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e namespace=k8s.io Feb 13 04:18:28.815122 env[1466]: time="2024-02-13T04:18:28.815116671Z" level=info msg="cleaning up dead shim" Feb 13 04:18:28.822205 env[1466]: time="2024-02-13T04:18:28.822163169Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3455 runtime=io.containerd.runc.v2\n" Feb 13 04:18:28.822532 env[1466]: time="2024-02-13T04:18:28.822499161Z" level=info msg="TearDown network for sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" successfully" Feb 13 04:18:28.822532 env[1466]: time="2024-02-13T04:18:28.822525459Z" level=info msg="StopPodSandbox for \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" returns successfully" Feb 13 04:18:28.860067 kubelet[1841]: I0213 04:18:28.860001 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-host-proc-sys-kernel\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.860450 kubelet[1841]: I0213 04:18:28.860104 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chtct\" (UniqueName: \"kubernetes.io/projected/28cdcf06-4ffd-47b4-8529-e121af6c6439-kube-api-access-chtct\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.860450 kubelet[1841]: I0213 04:18:28.860166 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-run\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.860450 kubelet[1841]: I0213 04:18:28.860218 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-etc-cni-netd\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.860450 kubelet[1841]: I0213 04:18:28.860161 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.860450 kubelet[1841]: I0213 04:18:28.860273 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-bpf-maps\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.860450 kubelet[1841]: I0213 04:18:28.860327 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-lib-modules\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.861230 kubelet[1841]: I0213 04:18:28.860280 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.861230 kubelet[1841]: I0213 04:18:28.860364 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.861230 kubelet[1841]: I0213 04:18:28.860388 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-config-path\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.861230 kubelet[1841]: I0213 04:18:28.860363 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.861230 kubelet[1841]: I0213 04:18:28.860503 1841 scope.go:115] "RemoveContainer" containerID="0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982" Feb 13 04:18:28.861230 kubelet[1841]: I0213 04:18:28.860551 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-hostproc" (OuterVolumeSpecName: "hostproc") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.861955 kubelet[1841]: I0213 04:18:28.860522 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.861955 kubelet[1841]: I0213 04:18:28.860516 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-hostproc\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.861955 kubelet[1841]: I0213 04:18:28.860811 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28cdcf06-4ffd-47b4-8529-e121af6c6439-hubble-tls\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.861955 kubelet[1841]: W0213 04:18:28.860800 1841 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/28cdcf06-4ffd-47b4-8529-e121af6c6439/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 04:18:28.861955 kubelet[1841]: I0213 04:18:28.860910 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-xtables-lock\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.861955 kubelet[1841]: I0213 04:18:28.861000 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-cgroup\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.862750 kubelet[1841]: I0213 04:18:28.861017 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.862750 kubelet[1841]: I0213 04:18:28.861117 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28cdcf06-4ffd-47b4-8529-e121af6c6439-clustermesh-secrets\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.862750 kubelet[1841]: I0213 04:18:28.861135 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.862750 kubelet[1841]: I0213 04:18:28.861199 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-host-proc-sys-net\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.862750 kubelet[1841]: I0213 04:18:28.861275 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cni-path\") pod \"28cdcf06-4ffd-47b4-8529-e121af6c6439\" (UID: \"28cdcf06-4ffd-47b4-8529-e121af6c6439\") " Feb 13 04:18:28.863432 kubelet[1841]: I0213 04:18:28.861318 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.863432 kubelet[1841]: I0213 04:18:28.861361 1841 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-hostproc\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.863432 kubelet[1841]: I0213 04:18:28.861373 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cni-path" (OuterVolumeSpecName: "cni-path") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:28.863432 kubelet[1841]: I0213 04:18:28.861445 1841 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-xtables-lock\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.863432 kubelet[1841]: I0213 04:18:28.861486 1841 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-cgroup\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.863432 kubelet[1841]: I0213 04:18:28.861530 1841 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-bpf-maps\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.863432 kubelet[1841]: I0213 04:18:28.861563 1841 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-lib-modules\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.864738 kubelet[1841]: I0213 04:18:28.861618 1841 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-host-proc-sys-kernel\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.864738 kubelet[1841]: I0213 04:18:28.861681 1841 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-run\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.864738 kubelet[1841]: I0213 04:18:28.861721 1841 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-etc-cni-netd\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.865453 env[1466]: time="2024-02-13T04:18:28.863783426Z" level=info msg="RemoveContainer for \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\"" Feb 13 04:18:28.866298 kubelet[1841]: I0213 04:18:28.866236 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 04:18:28.867712 kubelet[1841]: I0213 04:18:28.867638 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28cdcf06-4ffd-47b4-8529-e121af6c6439-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 04:18:28.868306 kubelet[1841]: I0213 04:18:28.868241 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28cdcf06-4ffd-47b4-8529-e121af6c6439-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 04:18:28.868306 kubelet[1841]: I0213 04:18:28.868266 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28cdcf06-4ffd-47b4-8529-e121af6c6439-kube-api-access-chtct" (OuterVolumeSpecName: "kube-api-access-chtct") pod "28cdcf06-4ffd-47b4-8529-e121af6c6439" (UID: "28cdcf06-4ffd-47b4-8529-e121af6c6439"). InnerVolumeSpecName "kube-api-access-chtct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 04:18:28.868732 env[1466]: time="2024-02-13T04:18:28.868654617Z" level=info msg="RemoveContainer for \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\" returns successfully" Feb 13 04:18:28.869223 kubelet[1841]: I0213 04:18:28.869190 1841 scope.go:115] "RemoveContainer" containerID="798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503" Feb 13 04:18:28.871072 systemd[1]: var-lib-kubelet-pods-28cdcf06\x2d4ffd\x2d47b4\x2d8529\x2de121af6c6439-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dchtct.mount: Deactivated successfully. Feb 13 04:18:28.871352 systemd[1]: var-lib-kubelet-pods-28cdcf06\x2d4ffd\x2d47b4\x2d8529\x2de121af6c6439-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 04:18:28.871575 systemd[1]: var-lib-kubelet-pods-28cdcf06\x2d4ffd\x2d47b4\x2d8529\x2de121af6c6439-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 04:18:28.871779 env[1466]: time="2024-02-13T04:18:28.871671981Z" level=info msg="RemoveContainer for \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\"" Feb 13 04:18:28.875391 env[1466]: time="2024-02-13T04:18:28.875281426Z" level=info msg="RemoveContainer for \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\" returns successfully" Feb 13 04:18:28.875705 kubelet[1841]: I0213 04:18:28.875663 1841 scope.go:115] "RemoveContainer" containerID="29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599" Feb 13 04:18:28.877949 env[1466]: time="2024-02-13T04:18:28.877878721Z" level=info msg="RemoveContainer for \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\"" Feb 13 04:18:28.881507 env[1466]: time="2024-02-13T04:18:28.881410223Z" level=info msg="RemoveContainer for \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\" returns successfully" Feb 13 04:18:28.881910 kubelet[1841]: I0213 04:18:28.881866 1841 scope.go:115] "RemoveContainer" containerID="7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7" Feb 13 04:18:28.884375 env[1466]: time="2024-02-13T04:18:28.884307592Z" level=info msg="RemoveContainer for \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\"" Feb 13 04:18:28.887867 env[1466]: time="2024-02-13T04:18:28.887795357Z" level=info msg="RemoveContainer for \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\" returns successfully" Feb 13 04:18:28.888176 kubelet[1841]: I0213 04:18:28.888133 1841 scope.go:115] "RemoveContainer" containerID="3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a" Feb 13 04:18:28.890579 env[1466]: time="2024-02-13T04:18:28.890509903Z" level=info msg="RemoveContainer for \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\"" Feb 13 04:18:28.893964 env[1466]: time="2024-02-13T04:18:28.893858845Z" level=info msg="RemoveContainer for \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\" returns successfully" Feb 13 04:18:28.894208 kubelet[1841]: I0213 
04:18:28.894166 1841 scope.go:115] "RemoveContainer" containerID="0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982" Feb 13 04:18:28.894858 env[1466]: time="2024-02-13T04:18:28.894643972Z" level=error msg="ContainerStatus for \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\": not found" Feb 13 04:18:28.895098 kubelet[1841]: E0213 04:18:28.895047 1841 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\": not found" containerID="0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982" Feb 13 04:18:28.895284 kubelet[1841]: I0213 04:18:28.895123 1841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982} err="failed to get container status \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\": rpc error: code = NotFound desc = an error occurred when try to find container \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\": not found" Feb 13 04:18:28.895284 kubelet[1841]: I0213 04:18:28.895158 1841 scope.go:115] "RemoveContainer" containerID="798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503" Feb 13 04:18:28.895705 env[1466]: time="2024-02-13T04:18:28.895571221Z" level=error msg="ContainerStatus for \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\": not found" Feb 13 04:18:28.895931 kubelet[1841]: E0213 04:18:28.895896 1841 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\": not found" containerID="798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503" Feb 13 04:18:28.896108 kubelet[1841]: I0213 04:18:28.895967 1841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503} err="failed to get container status \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\": rpc error: code = NotFound desc = an error occurred when try to find container \"798393ed565c13d64b592e9fa3fcb5c327e35097509d95c697d98cf2dff30503\": not found" Feb 13 04:18:28.896108 kubelet[1841]: I0213 04:18:28.895997 1841 scope.go:115] "RemoveContainer" containerID="29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599" Feb 13 04:18:28.896595 env[1466]: time="2024-02-13T04:18:28.896455319Z" level=error msg="ContainerStatus for \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\": not found" Feb 13 04:18:28.896973 kubelet[1841]: E0213 04:18:28.896937 1841 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\": not found" containerID="29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599" Feb 13 04:18:28.897132 kubelet[1841]: I0213 04:18:28.897011 1841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599} err="failed to get container status \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\": rpc error: code = NotFound desc = an error occurred when try to find container \"29d8859582262efc26446e40c51c7a30c172c6b77f85c8fb9bc0f2145d161599\": not found" Feb 13 04:18:28.897132 kubelet[1841]: I0213 04:18:28.897039 1841 scope.go:115] "RemoveContainer" containerID="7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7" Feb 13 04:18:28.897528 env[1466]: time="2024-02-13T04:18:28.897385189Z" level=error msg="ContainerStatus for \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\": not found" Feb 13 04:18:28.897748 kubelet[1841]: E0213 04:18:28.897715 1841 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\": not found" containerID="7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7" Feb 13 04:18:28.897907 kubelet[1841]: I0213 04:18:28.897779 1841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7} err="failed to get container status \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f920974dda88296410d00f04be9931da2dea7066bc8e806fc02059a0dc600b7\": not found" Feb 13 04:18:28.897907 kubelet[1841]: I0213 04:18:28.897803 1841 scope.go:115] "RemoveContainer" containerID="3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a" Feb 13 04:18:28.898394 env[1466]: time="2024-02-13T04:18:28.898262613Z" level=error msg="ContainerStatus for \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\": not found" Feb 13 04:18:28.898779 kubelet[1841]: E0213 04:18:28.898740 1841 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\": not found" containerID="3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a" Feb 13 04:18:28.898944 kubelet[1841]: I0213 04:18:28.898819 1841 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a} err="failed to get container status \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c45b253872301d35b8a508891f16d20e32f1652365c31c5922a8bbc431f753a\": not found" Feb 13 04:18:28.923240 kubelet[1841]: E0213 04:18:28.923104 1841 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:28.962966 kubelet[1841]: I0213 04:18:28.962904 1841 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-host-proc-sys-net\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.962966 kubelet[1841]: I0213 04:18:28.962975 1841 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28cdcf06-4ffd-47b4-8529-e121af6c6439-cni-path\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.963447 kubelet[1841]: I0213 04:18:28.963010 1841 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28cdcf06-4ffd-47b4-8529-e121af6c6439-clustermesh-secrets\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.963447 kubelet[1841]: I0213 04:18:28.963146 1841 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-chtct\" (UniqueName: \"kubernetes.io/projected/28cdcf06-4ffd-47b4-8529-e121af6c6439-kube-api-access-chtct\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.963447 kubelet[1841]: I0213 04:18:28.963177 1841 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28cdcf06-4ffd-47b4-8529-e121af6c6439-cilium-config-path\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:28.963447 kubelet[1841]: I0213 04:18:28.963206 1841 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28cdcf06-4ffd-47b4-8529-e121af6c6439-hubble-tls\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:29.166229 systemd[1]: Removed slice kubepods-burstable-pod28cdcf06_4ffd_47b4_8529_e121af6c6439.slice. Feb 13 04:18:29.166276 systemd[1]: kubepods-burstable-pod28cdcf06_4ffd_47b4_8529_e121af6c6439.slice: Consumed 6.296s CPU time. 
Feb 13 04:18:29.840603 env[1466]: time="2024-02-13T04:18:29.840463554Z" level=info msg="StopContainer for \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\" with timeout 1 (s)" Feb 13 04:18:29.841397 env[1466]: time="2024-02-13T04:18:29.840561333Z" level=error msg="StopContainer for \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\": not found" Feb 13 04:18:29.841397 env[1466]: time="2024-02-13T04:18:29.841274629Z" level=info msg="StopPodSandbox for \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\"" Feb 13 04:18:29.841665 kubelet[1841]: E0213 04:18:29.840890 1841 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982\": not found" containerID="0538ac9af5e9cf7761693d354a778087e42e72a0429db127d6941b483fd34982" Feb 13 04:18:29.841823 env[1466]: time="2024-02-13T04:18:29.841491744Z" level=info msg="TearDown network for sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" successfully" Feb 13 04:18:29.841823 env[1466]: time="2024-02-13T04:18:29.841586831Z" level=info msg="StopPodSandbox for \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" returns successfully" Feb 13 04:18:29.842657 kubelet[1841]: I0213 04:18:29.842575 1841 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=28cdcf06-4ffd-47b4-8529-e121af6c6439 path="/var/lib/kubelet/pods/28cdcf06-4ffd-47b4-8529-e121af6c6439/volumes" Feb 13 04:18:29.923479 kubelet[1841]: E0213 04:18:29.923394 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:30.924360 kubelet[1841]: E0213 04:18:30.924252 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:31.047256 kubelet[1841]: I0213 04:18:31.047156 1841 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:18:31.047256 kubelet[1841]: E0213 04:18:31.047265 1841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28cdcf06-4ffd-47b4-8529-e121af6c6439" containerName="cilium-agent" Feb 13 04:18:31.047818 kubelet[1841]: E0213 04:18:31.047298 1841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28cdcf06-4ffd-47b4-8529-e121af6c6439" containerName="mount-cgroup" Feb 13 04:18:31.047818 kubelet[1841]: E0213 04:18:31.047326 1841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28cdcf06-4ffd-47b4-8529-e121af6c6439" containerName="apply-sysctl-overwrites" Feb 13 04:18:31.047818 kubelet[1841]: E0213 04:18:31.047353 1841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28cdcf06-4ffd-47b4-8529-e121af6c6439" containerName="mount-bpf-fs" Feb 13 04:18:31.047818 kubelet[1841]: E0213 04:18:31.047374 1841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28cdcf06-4ffd-47b4-8529-e121af6c6439" containerName="clean-cilium-state" Feb 13 04:18:31.047818 kubelet[1841]: I0213 04:18:31.047449 1841 memory_manager.go:346] "RemoveStaleState removing state" podUID="28cdcf06-4ffd-47b4-8529-e121af6c6439" containerName="cilium-agent" Feb 13 04:18:31.059453 kubelet[1841]: I0213 04:18:31.059357 1841 topology_manager.go:210] "Topology Admit Handler" Feb 13 
04:18:31.061902 systemd[1]: Created slice kubepods-besteffort-pod28e3a731_bf55_49a7_a019_64ccfdc669f5.slice. Feb 13 04:18:31.075978 systemd[1]: Created slice kubepods-burstable-pod12dc9494_b5b7_4c6c_a40d_d601ee927af4.slice. Feb 13 04:18:31.079009 kubelet[1841]: I0213 04:18:31.078923 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28e3a731-bf55-49a7-a019-64ccfdc669f5-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-78v4p\" (UID: \"28e3a731-bf55-49a7-a019-64ccfdc669f5\") " pod="kube-system/cilium-operator-f59cbd8c6-78v4p" Feb 13 04:18:31.079209 kubelet[1841]: I0213 04:18:31.079047 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqb67\" (UniqueName: \"kubernetes.io/projected/28e3a731-bf55-49a7-a019-64ccfdc669f5-kube-api-access-mqb67\") pod \"cilium-operator-f59cbd8c6-78v4p\" (UID: \"28e3a731-bf55-49a7-a019-64ccfdc669f5\") " pod="kube-system/cilium-operator-f59cbd8c6-78v4p" Feb 13 04:18:31.180589 kubelet[1841]: I0213 04:18:31.180359 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-ipsec-secrets\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.180589 kubelet[1841]: I0213 04:18:31.180492 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12dc9494-b5b7-4c6c-a40d-d601ee927af4-hubble-tls\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.180589 kubelet[1841]: I0213 04:18:31.180590 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-run\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.181070 kubelet[1841]: I0213 04:18:31.180723 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cni-path\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.181070 kubelet[1841]: I0213 04:18:31.180931 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-xtables-lock\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.181299 kubelet[1841]: I0213 04:18:31.181093 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-host-proc-sys-net\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.181503 kubelet[1841]: I0213 04:18:31.181301 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcj89\" (UniqueName: 
\"kubernetes.io/projected/12dc9494-b5b7-4c6c-a40d-d601ee927af4-kube-api-access-tcj89\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.181775 kubelet[1841]: I0213 04:18:31.181696 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-etc-cni-netd\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.181990 kubelet[1841]: I0213 04:18:31.181861 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-lib-modules\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.182119 kubelet[1841]: I0213 04:18:31.182044 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12dc9494-b5b7-4c6c-a40d-d601ee927af4-clustermesh-secrets\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.182357 kubelet[1841]: I0213 04:18:31.182317 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-bpf-maps\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.182572 kubelet[1841]: I0213 04:18:31.182527 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-hostproc\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.182718 kubelet[1841]: I0213 04:18:31.182686 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-cgroup\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.182830 kubelet[1841]: I0213 04:18:31.182779 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-config-path\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.182975 kubelet[1841]: I0213 04:18:31.182862 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-host-proc-sys-kernel\") pod \"cilium-nn4wc\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " pod="kube-system/cilium-nn4wc" Feb 13 04:18:31.367864 env[1466]: time="2024-02-13T04:18:31.367758580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-78v4p,Uid:28e3a731-bf55-49a7-a019-64ccfdc669f5,Namespace:kube-system,Attempt:0,}" Feb 13 04:18:31.383442 env[1466]: time="2024-02-13T04:18:31.383388405Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 04:18:31.383442 env[1466]: time="2024-02-13T04:18:31.383408540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 04:18:31.383442 env[1466]: time="2024-02-13T04:18:31.383415404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 04:18:31.383533 env[1466]: time="2024-02-13T04:18:31.383480179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5464fdc592bfd864eb50683792e538f5d2720e59d556b38fd7d309307e13d9f7 pid=3482 runtime=io.containerd.runc.v2 Feb 13 04:18:31.388846 systemd[1]: Started cri-containerd-5464fdc592bfd864eb50683792e538f5d2720e59d556b38fd7d309307e13d9f7.scope. Feb 13 04:18:31.389023 env[1466]: time="2024-02-13T04:18:31.388997841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nn4wc,Uid:12dc9494-b5b7-4c6c-a40d-d601ee927af4,Namespace:kube-system,Attempt:0,}" Feb 13 04:18:31.394562 env[1466]: time="2024-02-13T04:18:31.394521751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 04:18:31.394562 env[1466]: time="2024-02-13T04:18:31.394543784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 04:18:31.394562 env[1466]: time="2024-02-13T04:18:31.394550665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 04:18:31.394697 env[1466]: time="2024-02-13T04:18:31.394620116Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f pid=3514 runtime=io.containerd.runc.v2 Feb 13 04:18:31.399910 systemd[1]: Started cri-containerd-8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f.scope. 
Feb 13 04:18:31.411024 env[1466]: time="2024-02-13T04:18:31.410994874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nn4wc,Uid:12dc9494-b5b7-4c6c-a40d-d601ee927af4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\"" Feb 13 04:18:31.412121 env[1466]: time="2024-02-13T04:18:31.412104604Z" level=info msg="CreateContainer within sandbox \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 04:18:31.412850 env[1466]: time="2024-02-13T04:18:31.412831705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-78v4p,Uid:28e3a731-bf55-49a7-a019-64ccfdc669f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5464fdc592bfd864eb50683792e538f5d2720e59d556b38fd7d309307e13d9f7\"" Feb 13 04:18:31.417001 env[1466]: time="2024-02-13T04:18:31.416957744Z" level=info msg="CreateContainer within sandbox \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7\"" Feb 13 04:18:31.417174 env[1466]: time="2024-02-13T04:18:31.417131071Z" level=info msg="StartContainer for \"3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7\"" Feb 13 04:18:31.424126 systemd[1]: Started cri-containerd-3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7.scope. Feb 13 04:18:31.429639 systemd[1]: cri-containerd-3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7.scope: Deactivated successfully. Feb 13 04:18:31.429822 systemd[1]: Stopped cri-containerd-3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7.scope. 
Feb 13 04:18:31.438512 env[1466]: time="2024-02-13T04:18:31.438433953Z" level=info msg="shim disconnected" id=3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7 Feb 13 04:18:31.438512 env[1466]: time="2024-02-13T04:18:31.438468111Z" level=warning msg="cleaning up after shim disconnected" id=3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7 namespace=k8s.io Feb 13 04:18:31.438512 env[1466]: time="2024-02-13T04:18:31.438474209Z" level=info msg="cleaning up dead shim" Feb 13 04:18:31.442162 env[1466]: time="2024-02-13T04:18:31.442112355Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3580 runtime=io.containerd.runc.v2\ntime=\"2024-02-13T04:18:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 13 04:18:31.442299 env[1466]: time="2024-02-13T04:18:31.442244415Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Feb 13 04:18:31.442403 env[1466]: time="2024-02-13T04:18:31.442378624Z" level=error msg="Failed to pipe stderr of container \"3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7\"" error="reading from a closed fifo" Feb 13 04:18:31.442403 env[1466]: time="2024-02-13T04:18:31.442380188Z" level=error msg="Failed to pipe stdout of container \"3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7\"" error="reading from a closed fifo" Feb 13 04:18:31.443098 env[1466]: time="2024-02-13T04:18:31.443044343Z" level=error msg="StartContainer for \"3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 13 04:18:31.443235 kubelet[1841]: E0213 04:18:31.443195 1841 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7" Feb 13 04:18:31.443284 kubelet[1841]: E0213 04:18:31.443272 1841 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 13 04:18:31.443284 kubelet[1841]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 13 04:18:31.443284 kubelet[1841]: rm /hostbin/cilium-mount Feb 13 04:18:31.443284 kubelet[1841]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tcj89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-nn4wc_kube-system(12dc9494-b5b7-4c6c-a40d-d601ee927af4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 13 04:18:31.443409 kubelet[1841]: E0213 04:18:31.443297 1841 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nn4wc" podUID=12dc9494-b5b7-4c6c-a40d-d601ee927af4 Feb 13 04:18:31.874176 env[1466]: time="2024-02-13T04:18:31.874051420Z" level=info msg="StopPodSandbox for \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\"" Feb 13 04:18:31.874518 env[1466]: time="2024-02-13T04:18:31.874177639Z" level=info msg="Container to stop \"3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 04:18:31.883156 systemd[1]: cri-containerd-8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f.scope: Deactivated successfully. 
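The init container spec dumped above shows exactly what mount-cgroup runs: copy cilium-mount into the host's CNI bin directory, enter the host's cgroup and mount namespaces via PID 1, mount the cgroup2 hierarchy at CGROUP_ROOT, then remove the binary. Rewritten as a readable shell sketch using the CGROUP_ROOT and BIN_PATH values from the logged Env. The failure itself happens before this command runs: runc's write to /proc/self/attr/keycreate is rejected with "invalid argument", which appears to be an SELinux keyring-labeling problem on this node, consistent with the spc_t SELinuxOptions in the spec.

  # Reconstruction of the logged Command (sh -ec ...), values taken from the logged Env
  CGROUP_ROOT=/run/cilium/cgroupv2
  BIN_PATH=/opt/cni/bin
  cp /usr/bin/cilium-mount /hostbin/cilium-mount
  nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \
    "${BIN_PATH}/cilium-mount" "${CGROUP_ROOT}"
  rm /hostbin/cilium-mount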
Feb 13 04:18:31.909929 env[1466]: time="2024-02-13T04:18:31.909884143Z" level=info msg="shim disconnected" id=8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f Feb 13 04:18:31.910047 env[1466]: time="2024-02-13T04:18:31.909932386Z" level=warning msg="cleaning up after shim disconnected" id=8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f namespace=k8s.io Feb 13 04:18:31.910047 env[1466]: time="2024-02-13T04:18:31.909945861Z" level=info msg="cleaning up dead shim" Feb 13 04:18:31.914957 env[1466]: time="2024-02-13T04:18:31.914932019Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3611 runtime=io.containerd.runc.v2\n" Feb 13 04:18:31.915192 env[1466]: time="2024-02-13T04:18:31.915141243Z" level=info msg="TearDown network for sandbox \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" successfully" Feb 13 04:18:31.915192 env[1466]: time="2024-02-13T04:18:31.915159852Z" level=info msg="StopPodSandbox for \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" returns successfully" Feb 13 04:18:31.924653 kubelet[1841]: E0213 04:18:31.924635 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:31.988994 kubelet[1841]: I0213 04:18:31.988886 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cni-path\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.988994 kubelet[1841]: I0213 04:18:31.988982 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-xtables-lock\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.989414 kubelet[1841]: I0213 04:18:31.989008 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cni-path" (OuterVolumeSpecName: "cni-path") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.989414 kubelet[1841]: I0213 04:18:31.989046 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-host-proc-sys-net\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.989414 kubelet[1841]: I0213 04:18:31.989073 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.989414 kubelet[1841]: I0213 04:18:31.989110 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.989414 kubelet[1841]: I0213 04:18:31.989170 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-etc-cni-netd\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.990098 kubelet[1841]: I0213 04:18:31.989273 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.990098 kubelet[1841]: I0213 04:18:31.989306 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12dc9494-b5b7-4c6c-a40d-d601ee927af4-hubble-tls\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.990098 kubelet[1841]: I0213 04:18:31.989392 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcj89\" (UniqueName: \"kubernetes.io/projected/12dc9494-b5b7-4c6c-a40d-d601ee927af4-kube-api-access-tcj89\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.990098 kubelet[1841]: I0213 04:18:31.989485 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-config-path\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.990098 kubelet[1841]: I0213 04:18:31.989544 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-host-proc-sys-kernel\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.990098 kubelet[1841]: I0213 04:18:31.989596 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-lib-modules\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.990873 kubelet[1841]: I0213 04:18:31.989663 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12dc9494-b5b7-4c6c-a40d-d601ee927af4-clustermesh-secrets\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.990873 kubelet[1841]: I0213 04:18:31.989721 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-hostproc\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.990873 kubelet[1841]: I0213 04:18:31.989699 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.990873 kubelet[1841]: I0213 04:18:31.989800 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-cgroup\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.990873 kubelet[1841]: I0213 04:18:31.989726 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.991460 kubelet[1841]: I0213 04:18:31.989917 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-ipsec-secrets\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.991460 kubelet[1841]: I0213 04:18:31.989914 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.991460 kubelet[1841]: I0213 04:18:31.989875 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-hostproc" (OuterVolumeSpecName: "hostproc") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.991460 kubelet[1841]: I0213 04:18:31.990015 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-run\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.991460 kubelet[1841]: W0213 04:18:31.989964 1841 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/12dc9494-b5b7-4c6c-a40d-d601ee927af4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 04:18:31.991460 kubelet[1841]: I0213 04:18:31.990094 1841 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-bpf-maps\") pod \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\" (UID: \"12dc9494-b5b7-4c6c-a40d-d601ee927af4\") " Feb 13 04:18:31.992144 kubelet[1841]: I0213 04:18:31.990191 1841 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-xtables-lock\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:31.992144 kubelet[1841]: I0213 04:18:31.990120 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.992144 kubelet[1841]: I0213 04:18:31.990231 1841 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-host-proc-sys-net\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:31.992144 kubelet[1841]: I0213 04:18:31.990272 1841 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-etc-cni-netd\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:31.992144 kubelet[1841]: I0213 04:18:31.990301 1841 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cni-path\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:31.992144 kubelet[1841]: I0213 04:18:31.990213 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 04:18:31.992144 kubelet[1841]: I0213 04:18:31.990332 1841 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-host-proc-sys-kernel\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:31.992946 kubelet[1841]: I0213 04:18:31.990368 1841 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-lib-modules\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:31.992946 kubelet[1841]: I0213 04:18:31.990415 1841 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-hostproc\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:31.992946 kubelet[1841]: I0213 04:18:31.990474 1841 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-cgroup\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:31.994762 kubelet[1841]: I0213 04:18:31.994723 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 04:18:31.994881 kubelet[1841]: I0213 04:18:31.994869 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12dc9494-b5b7-4c6c-a40d-d601ee927af4-kube-api-access-tcj89" (OuterVolumeSpecName: "kube-api-access-tcj89") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "kube-api-access-tcj89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 04:18:31.994922 kubelet[1841]: I0213 04:18:31.994891 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12dc9494-b5b7-4c6c-a40d-d601ee927af4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 04:18:31.995004 kubelet[1841]: I0213 04:18:31.994963 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 04:18:31.995004 kubelet[1841]: I0213 04:18:31.994977 1841 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12dc9494-b5b7-4c6c-a40d-d601ee927af4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "12dc9494-b5b7-4c6c-a40d-d601ee927af4" (UID: "12dc9494-b5b7-4c6c-a40d-d601ee927af4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 04:18:32.090955 kubelet[1841]: I0213 04:18:32.090848 1841 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-ipsec-secrets\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:32.090955 kubelet[1841]: I0213 04:18:32.090921 1841 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-run\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:32.090955 kubelet[1841]: I0213 04:18:32.090957 1841 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12dc9494-b5b7-4c6c-a40d-d601ee927af4-bpf-maps\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:32.091524 kubelet[1841]: I0213 04:18:32.090987 1841 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12dc9494-b5b7-4c6c-a40d-d601ee927af4-hubble-tls\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:32.091524 kubelet[1841]: I0213 04:18:32.091023 1841 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-tcj89\" (UniqueName: \"kubernetes.io/projected/12dc9494-b5b7-4c6c-a40d-d601ee927af4-kube-api-access-tcj89\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:32.091524 kubelet[1841]: I0213 04:18:32.091054 1841 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12dc9494-b5b7-4c6c-a40d-d601ee927af4-cilium-config-path\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:32.091524 kubelet[1841]: I0213 04:18:32.091083 1841 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12dc9494-b5b7-4c6c-a40d-d601ee927af4-clustermesh-secrets\") on node \"10.67.80.11\" DevicePath \"\"" Feb 13 04:18:32.197936 systemd[1]: var-lib-kubelet-pods-12dc9494\x2db5b7\x2d4c6c\x2da40d\x2dd601ee927af4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtcj89.mount: Deactivated successfully. Feb 13 04:18:32.198026 systemd[1]: var-lib-kubelet-pods-12dc9494\x2db5b7\x2d4c6c\x2da40d\x2dd601ee927af4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 13 04:18:32.198080 systemd[1]: var-lib-kubelet-pods-12dc9494\x2db5b7\x2d4c6c\x2da40d\x2dd601ee927af4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 04:18:32.198111 systemd[1]: var-lib-kubelet-pods-12dc9494\x2db5b7\x2d4c6c\x2da40d\x2dd601ee927af4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 04:18:32.804824 kubelet[1841]: E0213 04:18:32.804753 1841 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 04:18:32.881128 kubelet[1841]: I0213 04:18:32.881065 1841 scope.go:115] "RemoveContainer" containerID="3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7" Feb 13 04:18:32.883525 env[1466]: time="2024-02-13T04:18:32.883452935Z" level=info msg="RemoveContainer for \"3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7\"" Feb 13 04:18:32.886110 env[1466]: time="2024-02-13T04:18:32.886098623Z" level=info msg="RemoveContainer for \"3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7\" returns successfully" Feb 13 04:18:32.887045 systemd[1]: Removed slice kubepods-burstable-pod12dc9494_b5b7_4c6c_a40d_d601ee927af4.slice. Feb 13 04:18:32.901028 kubelet[1841]: I0213 04:18:32.900991 1841 topology_manager.go:210] "Topology Admit Handler" Feb 13 04:18:32.901028 kubelet[1841]: E0213 04:18:32.901014 1841 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="12dc9494-b5b7-4c6c-a40d-d601ee927af4" containerName="mount-cgroup" Feb 13 04:18:32.901028 kubelet[1841]: I0213 04:18:32.901027 1841 memory_manager.go:346] "RemoveStaleState removing state" podUID="12dc9494-b5b7-4c6c-a40d-d601ee927af4" containerName="mount-cgroup" Feb 13 04:18:32.903635 systemd[1]: Created slice kubepods-burstable-podf780d069_7217_411f_9591_8c27d23f3d0c.slice. Feb 13 04:18:32.925267 kubelet[1841]: E0213 04:18:32.925227 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:32.999726 kubelet[1841]: I0213 04:18:32.999613 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-lib-modules\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.000042 kubelet[1841]: I0213 04:18:32.999920 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-host-proc-sys-kernel\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.000042 kubelet[1841]: I0213 04:18:32.999998 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-hostproc\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.000478 kubelet[1841]: I0213 04:18:33.000070 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-cni-path\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.000478 kubelet[1841]: I0213 04:18:33.000177 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-cilium-run\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " 
pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.000478 kubelet[1841]: I0213 04:18:33.000442 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-xtables-lock\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.001021 kubelet[1841]: I0213 04:18:33.000584 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f780d069-7217-411f-9591-8c27d23f3d0c-cilium-ipsec-secrets\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.001021 kubelet[1841]: I0213 04:18:33.000708 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ft8q\" (UniqueName: \"kubernetes.io/projected/f780d069-7217-411f-9591-8c27d23f3d0c-kube-api-access-7ft8q\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.001021 kubelet[1841]: I0213 04:18:33.000855 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-cilium-cgroup\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.001021 kubelet[1841]: I0213 04:18:33.000981 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-etc-cni-netd\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.001742 kubelet[1841]: I0213 04:18:33.001109 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f780d069-7217-411f-9591-8c27d23f3d0c-clustermesh-secrets\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.001742 kubelet[1841]: I0213 04:18:33.001226 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f780d069-7217-411f-9591-8c27d23f3d0c-hubble-tls\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.001742 kubelet[1841]: I0213 04:18:33.001349 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-bpf-maps\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.001742 kubelet[1841]: I0213 04:18:33.001490 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f780d069-7217-411f-9591-8c27d23f3d0c-cilium-config-path\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.001742 kubelet[1841]: I0213 04:18:33.001608 1841 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f780d069-7217-411f-9591-8c27d23f3d0c-host-proc-sys-net\") pod \"cilium-vlbcx\" (UID: \"f780d069-7217-411f-9591-8c27d23f3d0c\") " pod="kube-system/cilium-vlbcx" Feb 13 04:18:33.519454 env[1466]: time="2024-02-13T04:18:33.519352962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vlbcx,Uid:f780d069-7217-411f-9591-8c27d23f3d0c,Namespace:kube-system,Attempt:0,}" Feb 13 04:18:33.534296 env[1466]: time="2024-02-13T04:18:33.534189916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 04:18:33.534296 env[1466]: time="2024-02-13T04:18:33.534243756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 04:18:33.534296 env[1466]: time="2024-02-13T04:18:33.534253360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 04:18:33.534396 env[1466]: time="2024-02-13T04:18:33.534340500Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce pid=3638 runtime=io.containerd.runc.v2 Feb 13 04:18:33.541532 systemd[1]: Started cri-containerd-36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce.scope. Feb 13 04:18:33.551020 env[1466]: time="2024-02-13T04:18:33.550993211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vlbcx,Uid:f780d069-7217-411f-9591-8c27d23f3d0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\"" Feb 13 04:18:33.552229 env[1466]: time="2024-02-13T04:18:33.552211745Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 04:18:33.556595 env[1466]: time="2024-02-13T04:18:33.556576907Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bf8092d91ff3a037590ee9d9156feb63dc7e75ab19a4327c3b284ea39d2f8219\"" Feb 13 04:18:33.556816 env[1466]: time="2024-02-13T04:18:33.556773620Z" level=info msg="StartContainer for \"bf8092d91ff3a037590ee9d9156feb63dc7e75ab19a4327c3b284ea39d2f8219\"" Feb 13 04:18:33.564698 systemd[1]: Started cri-containerd-bf8092d91ff3a037590ee9d9156feb63dc7e75ab19a4327c3b284ea39d2f8219.scope. Feb 13 04:18:33.578273 env[1466]: time="2024-02-13T04:18:33.578241845Z" level=info msg="StartContainer for \"bf8092d91ff3a037590ee9d9156feb63dc7e75ab19a4327c3b284ea39d2f8219\" returns successfully" Feb 13 04:18:33.584238 systemd[1]: cri-containerd-bf8092d91ff3a037590ee9d9156feb63dc7e75ab19a4327c3b284ea39d2f8219.scope: Deactivated successfully. 
Feb 13 04:18:33.613983 env[1466]: time="2024-02-13T04:18:33.613913718Z" level=info msg="shim disconnected" id=bf8092d91ff3a037590ee9d9156feb63dc7e75ab19a4327c3b284ea39d2f8219 Feb 13 04:18:33.613983 env[1466]: time="2024-02-13T04:18:33.613953736Z" level=warning msg="cleaning up after shim disconnected" id=bf8092d91ff3a037590ee9d9156feb63dc7e75ab19a4327c3b284ea39d2f8219 namespace=k8s.io Feb 13 04:18:33.613983 env[1466]: time="2024-02-13T04:18:33.613964435Z" level=info msg="cleaning up dead shim" Feb 13 04:18:33.620318 env[1466]: time="2024-02-13T04:18:33.620259847Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3719 runtime=io.containerd.runc.v2\n" Feb 13 04:18:33.840464 env[1466]: time="2024-02-13T04:18:33.840179196Z" level=info msg="StopPodSandbox for \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\"" Feb 13 04:18:33.840792 env[1466]: time="2024-02-13T04:18:33.840385524Z" level=info msg="TearDown network for sandbox \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" successfully" Feb 13 04:18:33.840792 env[1466]: time="2024-02-13T04:18:33.840524551Z" level=info msg="StopPodSandbox for \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" returns successfully" Feb 13 04:18:33.842112 kubelet[1841]: I0213 04:18:33.842041 1841 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=12dc9494-b5b7-4c6c-a40d-d601ee927af4 path="/var/lib/kubelet/pods/12dc9494-b5b7-4c6c-a40d-d601ee927af4/volumes" Feb 13 04:18:33.891538 env[1466]: time="2024-02-13T04:18:33.891408014Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 04:18:33.905601 env[1466]: time="2024-02-13T04:18:33.905482984Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"37c8594f6e21e790c7b93f1eb6d808014800229dad94201652a1beb86537c59e\"" Feb 13 04:18:33.906528 env[1466]: time="2024-02-13T04:18:33.906392148Z" level=info msg="StartContainer for \"37c8594f6e21e790c7b93f1eb6d808014800229dad94201652a1beb86537c59e\"" Feb 13 04:18:33.925495 kubelet[1841]: E0213 04:18:33.925476 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:33.926576 systemd[1]: Started cri-containerd-37c8594f6e21e790c7b93f1eb6d808014800229dad94201652a1beb86537c59e.scope. Feb 13 04:18:33.938704 env[1466]: time="2024-02-13T04:18:33.938675321Z" level=info msg="StartContainer for \"37c8594f6e21e790c7b93f1eb6d808014800229dad94201652a1beb86537c59e\" returns successfully" Feb 13 04:18:33.942778 systemd[1]: cri-containerd-37c8594f6e21e790c7b93f1eb6d808014800229dad94201652a1beb86537c59e.scope: Deactivated successfully. 
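The apply-sysctl-overwrites step also runs to completion. The log does not record which sysctls it writes; in Cilium this init step typically relaxes reverse-path filtering so forwarded and proxied traffic is not dropped. The keys and values below are assumptions for illustration, not taken from this log:

  # Example of the kind of override such an init step applies (assumed values)
  sysctl -w net.ipv4.conf.all.rp_filter=0
  sysctl -w net.ipv4.conf.default.rp_filter=0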
Feb 13 04:18:33.972211 env[1466]: time="2024-02-13T04:18:33.972138781Z" level=info msg="shim disconnected" id=37c8594f6e21e790c7b93f1eb6d808014800229dad94201652a1beb86537c59e Feb 13 04:18:33.972211 env[1466]: time="2024-02-13T04:18:33.972188610Z" level=warning msg="cleaning up after shim disconnected" id=37c8594f6e21e790c7b93f1eb6d808014800229dad94201652a1beb86537c59e namespace=k8s.io Feb 13 04:18:33.972211 env[1466]: time="2024-02-13T04:18:33.972200147Z" level=info msg="cleaning up dead shim" Feb 13 04:18:33.979173 env[1466]: time="2024-02-13T04:18:33.979099790Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3779 runtime=io.containerd.runc.v2\n" Feb 13 04:18:34.547613 kubelet[1841]: W0213 04:18:34.547484 1841 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12dc9494_b5b7_4c6c_a40d_d601ee927af4.slice/cri-containerd-3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7.scope WatchSource:0}: container "3f5c14299c0cb333deca112b0b07acfa6a3aa72e8b78fe5b435f13662a3af5c7" in namespace "k8s.io": not found Feb 13 04:18:34.902187 env[1466]: time="2024-02-13T04:18:34.901960353Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 04:18:34.920775 env[1466]: time="2024-02-13T04:18:34.920734430Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370\"" Feb 13 04:18:34.921112 env[1466]: time="2024-02-13T04:18:34.921070189Z" level=info msg="StartContainer for \"11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370\"" Feb 13 04:18:34.926123 kubelet[1841]: E0213 04:18:34.926084 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:34.930098 systemd[1]: Started cri-containerd-11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370.scope. Feb 13 04:18:34.942582 env[1466]: time="2024-02-13T04:18:34.942529950Z" level=info msg="StartContainer for \"11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370\" returns successfully" Feb 13 04:18:34.944106 systemd[1]: cri-containerd-11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370.scope: Deactivated successfully. Feb 13 04:18:34.954549 env[1466]: time="2024-02-13T04:18:34.954522966Z" level=info msg="shim disconnected" id=11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370 Feb 13 04:18:34.954645 env[1466]: time="2024-02-13T04:18:34.954551426Z" level=warning msg="cleaning up after shim disconnected" id=11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370 namespace=k8s.io Feb 13 04:18:34.954645 env[1466]: time="2024-02-13T04:18:34.954557538Z" level=info msg="cleaning up dead shim" Feb 13 04:18:34.958117 env[1466]: time="2024-02-13T04:18:34.958075430Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3834 runtime=io.containerd.runc.v2\n" Feb 13 04:18:35.274061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370-rootfs.mount: Deactivated successfully. 
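mount-bpf-fs likewise completes. As the name suggests, this step ensures the BPF filesystem is mounted so the agent can pin its maps; the log only records the container lifecycle, so the following is a sketch of the equivalent manual operation, not the container's literal command:

  # Mount the BPF filesystem at its conventional location if it is not already there
  mountpoint -q /sys/fs/bpf || mount -t bpf bpf /sys/fs/bpf
  # Confirm
  findmnt -t bpf /sys/fs/bpf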
Feb 13 04:18:35.279349 kubelet[1841]: I0213 04:18:35.279302 1841 setters.go:548] "Node became not ready" node="10.67.80.11" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-13 04:18:35.279273448 +0000 UTC m=+387.911383827 LastTransitionTime:2024-02-13 04:18:35.279273448 +0000 UTC m=+387.911383827 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 13 04:18:35.909843 env[1466]: time="2024-02-13T04:18:35.909711770Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 04:18:35.924164 env[1466]: time="2024-02-13T04:18:35.924106539Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12\"" Feb 13 04:18:35.924434 env[1466]: time="2024-02-13T04:18:35.924398733Z" level=info msg="StartContainer for \"d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12\"" Feb 13 04:18:35.926718 kubelet[1841]: E0213 04:18:35.926690 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:35.932801 systemd[1]: Started cri-containerd-d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12.scope. Feb 13 04:18:35.944850 env[1466]: time="2024-02-13T04:18:35.944789155Z" level=info msg="StartContainer for \"d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12\" returns successfully" Feb 13 04:18:35.945109 systemd[1]: cri-containerd-d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12.scope: Deactivated successfully. Feb 13 04:18:35.955501 env[1466]: time="2024-02-13T04:18:35.955467435Z" level=info msg="shim disconnected" id=d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12 Feb 13 04:18:35.955501 env[1466]: time="2024-02-13T04:18:35.955499610Z" level=warning msg="cleaning up after shim disconnected" id=d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12 namespace=k8s.io Feb 13 04:18:35.955639 env[1466]: time="2024-02-13T04:18:35.955510058Z" level=info msg="cleaning up dead shim" Feb 13 04:18:35.960043 env[1466]: time="2024-02-13T04:18:35.960019648Z" level=warning msg="cleanup warnings time=\"2024-02-13T04:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3887 runtime=io.containerd.runc.v2\n" Feb 13 04:18:36.274489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12-rootfs.mount: Deactivated successfully. 
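The NotReady condition reported above is expected at this point: the node stays "cni plugin not initialized" until the cilium-agent container (started just below) comes up and, typically, writes its CNI configuration. A quick way to watch for that on the node, assuming the standard CNI configuration directory:

  # The node stays NotReady until a CNI config file shows up here
  ls -l /etc/cni/net.d/
  # Kubelet's view of the readiness condition
  journalctl -u kubelet --no-pager | grep -i networkready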
Feb 13 04:18:36.918978 env[1466]: time="2024-02-13T04:18:36.918851534Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 04:18:36.926928 kubelet[1841]: E0213 04:18:36.926844 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:36.936902 env[1466]: time="2024-02-13T04:18:36.936773910Z" level=info msg="CreateContainer within sandbox \"36b6fc8ca01b21f0bb67397c785242fde4483a784855e865f02424a4d3a224ce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"63751cf76689181d6111689d5ff9ba844bcd1ef44688d263afa8ad2b3159874a\"" Feb 13 04:18:36.937565 env[1466]: time="2024-02-13T04:18:36.937513968Z" level=info msg="StartContainer for \"63751cf76689181d6111689d5ff9ba844bcd1ef44688d263afa8ad2b3159874a\"" Feb 13 04:18:36.946294 systemd[1]: Started cri-containerd-63751cf76689181d6111689d5ff9ba844bcd1ef44688d263afa8ad2b3159874a.scope. Feb 13 04:18:36.959183 env[1466]: time="2024-02-13T04:18:36.959157746Z" level=info msg="StartContainer for \"63751cf76689181d6111689d5ff9ba844bcd1ef44688d263afa8ad2b3159874a\" returns successfully" Feb 13 04:18:37.101433 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 04:18:37.666876 kubelet[1841]: W0213 04:18:37.666765 1841 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf780d069_7217_411f_9591_8c27d23f3d0c.slice/cri-containerd-bf8092d91ff3a037590ee9d9156feb63dc7e75ab19a4327c3b284ea39d2f8219.scope WatchSource:0}: task bf8092d91ff3a037590ee9d9156feb63dc7e75ab19a4327c3b284ea39d2f8219 not found: not found Feb 13 04:18:37.927724 kubelet[1841]: E0213 04:18:37.927518 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:37.942195 kubelet[1841]: I0213 04:18:37.942104 1841 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vlbcx" podStartSLOduration=5.942027416 pod.CreationTimestamp="2024-02-13 04:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 04:18:37.941455043 +0000 UTC m=+390.573565494" watchObservedRunningTime="2024-02-13 04:18:37.942027416 +0000 UTC m=+390.574137845" Feb 13 04:18:38.928230 kubelet[1841]: E0213 04:18:38.928186 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:39.915304 systemd-networkd[1309]: lxc_health: Link UP Feb 13 04:18:39.928690 kubelet[1841]: E0213 04:18:39.928644 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 04:18:39.938336 systemd-networkd[1309]: lxc_health: Gained carrier Feb 13 04:18:39.938443 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 04:18:40.778023 kubelet[1841]: W0213 04:18:40.777981 1841 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf780d069_7217_411f_9591_8c27d23f3d0c.slice/cri-containerd-37c8594f6e21e790c7b93f1eb6d808014800229dad94201652a1beb86537c59e.scope WatchSource:0}: task 37c8594f6e21e790c7b93f1eb6d808014800229dad94201652a1beb86537c59e not found: not found Feb 13 04:18:40.929389 
kubelet[1841]: E0213 04:18:40.929370 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:41.673541 systemd-networkd[1309]: lxc_health: Gained IPv6LL
Feb 13 04:18:41.929842 kubelet[1841]: E0213 04:18:41.929789 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:42.930451 kubelet[1841]: E0213 04:18:42.930318 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:43.884533 kubelet[1841]: W0213 04:18:43.884411 1841 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf780d069_7217_411f_9591_8c27d23f3d0c.slice/cri-containerd-11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370.scope WatchSource:0}: task 11b4a0fe9f2182eeb82195280e0d275bc490184e155fa924a55d611ef9543370 not found: not found
Feb 13 04:18:43.930628 kubelet[1841]: E0213 04:18:43.930551 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:44.931802 kubelet[1841]: E0213 04:18:44.931702 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:45.932600 kubelet[1841]: E0213 04:18:45.932510 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:46.933824 kubelet[1841]: E0213 04:18:46.933718 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:46.995745 kubelet[1841]: W0213 04:18:46.995665 1841 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf780d069_7217_411f_9591_8c27d23f3d0c.slice/cri-containerd-d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12.scope WatchSource:0}: task d70c820c0608196aa20f38d553e714152583f0f40f6fcd6680fe4ce9e3b51e12 not found: not found
Feb 13 04:18:47.662036 kubelet[1841]: E0213 04:18:47.661962 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:47.934581 kubelet[1841]: E0213 04:18:47.934502 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:48.934949 kubelet[1841]: E0213 04:18:48.934829 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:49.935173 kubelet[1841]: E0213 04:18:49.935057 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:50.936209 kubelet[1841]: E0213 04:18:50.936103 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:51.936878 kubelet[1841]: E0213 04:18:51.936771 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:52.937635 kubelet[1841]: E0213 04:18:52.937523 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:53.938478 kubelet[1841]: E0213 04:18:53.938376 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:54.939013 kubelet[1841]: E0213 04:18:54.938907 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:55.940087 kubelet[1841]: E0213 04:18:55.939971 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:56.940696 kubelet[1841]: E0213 04:18:56.940585 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:57.941333 kubelet[1841]: E0213 04:18:57.941233 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:58.942410 kubelet[1841]: E0213 04:18:58.942306 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:18:59.942895 kubelet[1841]: E0213 04:18:59.942788 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:00.943861 kubelet[1841]: E0213 04:19:00.943777 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:01.944999 kubelet[1841]: E0213 04:19:01.944895 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:02.945853 kubelet[1841]: E0213 04:19:02.945750 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:03.947083 kubelet[1841]: E0213 04:19:03.946980 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:04.948338 kubelet[1841]: E0213 04:19:04.948236 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:05.949059 kubelet[1841]: E0213 04:19:05.948961 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:06.950292 kubelet[1841]: E0213 04:19:06.950188 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:07.662016 kubelet[1841]: E0213 04:19:07.661944 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:07.688346 env[1466]: time="2024-02-13T04:19:07.688233924Z" level=info msg="StopPodSandbox for \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\""
Feb 13 04:19:07.689575 env[1466]: time="2024-02-13T04:19:07.688483854Z" level=info msg="TearDown network for sandbox \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" successfully"
Feb 13 04:19:07.689575 env[1466]: time="2024-02-13T04:19:07.688585156Z" level=info msg="StopPodSandbox for \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" returns successfully"
Feb 13 04:19:07.689889 env[1466]: time="2024-02-13T04:19:07.689555567Z" level=info msg="RemovePodSandbox for \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\""
Feb 13 04:19:07.689889 env[1466]: time="2024-02-13T04:19:07.689635010Z" level=info msg="Forcibly stopping sandbox \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\""
Feb 13 04:19:07.689889 env[1466]: time="2024-02-13T04:19:07.689814583Z" level=info msg="TearDown network for sandbox \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" successfully"
Feb 13 04:19:07.694399 env[1466]: time="2024-02-13T04:19:07.694322086Z" level=info msg="RemovePodSandbox \"8a11ba6135d61994dfe4f1d0e9c78ceb21a6de3e434e45aaf306dad74532cb7f\" returns successfully"
Feb 13 04:19:07.695223 env[1466]: time="2024-02-13T04:19:07.695143000Z" level=info msg="StopPodSandbox for \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\""
Feb 13 04:19:07.695467 env[1466]: time="2024-02-13T04:19:07.695330162Z" level=info msg="TearDown network for sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" successfully"
Feb 13 04:19:07.695467 env[1466]: time="2024-02-13T04:19:07.695439325Z" level=info msg="StopPodSandbox for \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" returns successfully"
Feb 13 04:19:07.696230 env[1466]: time="2024-02-13T04:19:07.696149004Z" level=info msg="RemovePodSandbox for \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\""
Feb 13 04:19:07.696466 env[1466]: time="2024-02-13T04:19:07.696240546Z" level=info msg="Forcibly stopping sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\""
Feb 13 04:19:07.696659 env[1466]: time="2024-02-13T04:19:07.696474302Z" level=info msg="TearDown network for sandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" successfully"
Feb 13 04:19:07.700304 env[1466]: time="2024-02-13T04:19:07.700228436Z" level=info msg="RemovePodSandbox \"92a80adf7b34dd78d2eb18313b4dcde4de3f79b67a969f7999ca2e969757201e\" returns successfully"
Feb 13 04:19:07.950934 kubelet[1841]: E0213 04:19:07.950868 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:08.951586 kubelet[1841]: E0213 04:19:08.951518 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:09.951793 kubelet[1841]: E0213 04:19:09.951713 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:10.952490 kubelet[1841]: E0213 04:19:10.952388 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:11.952928 kubelet[1841]: E0213 04:19:11.952855 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:12.953975 kubelet[1841]: E0213 04:19:12.953864 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:13.954498 kubelet[1841]: E0213 04:19:13.954386 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:14.955043 kubelet[1841]: E0213 04:19:14.954937 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:15.955445 kubelet[1841]: E0213 04:19:15.955310 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:16.955869 kubelet[1841]: E0213 04:19:16.955769 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:17.956814 kubelet[1841]: E0213 04:19:17.956744 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:18.957319 kubelet[1841]: E0213 04:19:18.957262 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:19.957946 kubelet[1841]: E0213 04:19:19.957876 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:20.958631 kubelet[1841]: E0213 04:19:20.958558 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:21.959533 kubelet[1841]: E0213 04:19:21.959446 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:22.960612 kubelet[1841]: E0213 04:19:22.960547 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:23.961731 kubelet[1841]: E0213 04:19:23.961656 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:24.962699 kubelet[1841]: E0213 04:19:24.962630 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:25.963203 kubelet[1841]: E0213 04:19:25.963132 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:26.963356 kubelet[1841]: E0213 04:19:26.963280 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:27.661755 kubelet[1841]: E0213 04:19:27.661600 1841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:27.964618 kubelet[1841]: E0213 04:19:27.964517 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:28.965206 kubelet[1841]: E0213 04:19:28.965137 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:29.966096 kubelet[1841]: E0213 04:19:29.965978 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:30.966793 kubelet[1841]: E0213 04:19:30.966690 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:31.967810 kubelet[1841]: E0213 04:19:31.967732 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:32.968876 kubelet[1841]: E0213 04:19:32.968770 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 04:19:33.969252 kubelet[1841]: E0213 04:19:33.969172 1841 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"