Feb 13 06:20:58.568279 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 13 06:20:58.568292 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 06:20:58.568299 kernel: BIOS-provided physical RAM map:
Feb 13 06:20:58.568303 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 06:20:58.568306 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 06:20:58.568310 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 06:20:58.568315 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 06:20:58.568319 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 06:20:58.568323 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819e2fff] usable
Feb 13 06:20:58.568326 kernel: BIOS-e820: [mem 0x00000000819e3000-0x00000000819e3fff] ACPI NVS
Feb 13 06:20:58.568331 kernel: BIOS-e820: [mem 0x00000000819e4000-0x00000000819e4fff] reserved
Feb 13 06:20:58.568335 kernel: BIOS-e820: [mem 0x00000000819e5000-0x000000008afccfff] usable
Feb 13 06:20:58.568339 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 13 06:20:58.568343 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 13 06:20:58.568348 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 13 06:20:58.568353 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 13 06:20:58.568357 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 13 06:20:58.568361 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 13 06:20:58.568365 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 06:20:58.568370 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 06:20:58.568374 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 06:20:58.568378 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 06:20:58.568384 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 06:20:58.568389 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 13 06:20:58.568393 kernel: NX (Execute Disable) protection: active
Feb 13 06:20:58.568397 kernel: SMBIOS 3.2.1 present.
Feb 13 06:20:58.568402 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022
Feb 13 06:20:58.568406 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 06:20:58.568411 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 06:20:58.568415 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 06:20:58.568420 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 06:20:58.568424 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 13 06:20:58.568429 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 06:20:58.568433 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 13 06:20:58.568437 kernel: Using GB pages for direct mapping
Feb 13 06:20:58.568442 kernel: ACPI: Early table checksum verification disabled
Feb 13 06:20:58.568447 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 06:20:58.568451 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 06:20:58.568455 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 13 06:20:58.568460 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 06:20:58.568466 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 13 06:20:58.568471 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 13 06:20:58.568477 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 13 06:20:58.568481 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 06:20:58.568486 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 06:20:58.568491 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 06:20:58.568495 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 06:20:58.568500 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 06:20:58.568505 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 06:20:58.568509 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:20:58.568515 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 06:20:58.568520 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 06:20:58.568524 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:20:58.568529 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:20:58.568534 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 06:20:58.568538 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 06:20:58.568543 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:20:58.568548 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 06:20:58.568553 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 06:20:58.568558 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 13 06:20:58.568563 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 06:20:58.568567 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 06:20:58.568572 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 06:20:58.568577 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 13 06:20:58.568581 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 06:20:58.568586 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 06:20:58.568591 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 06:20:58.568596 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 06:20:58.568601 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 06:20:58.568606 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 13 06:20:58.568610 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 13 06:20:58.568615 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 13 06:20:58.568620 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 13 06:20:58.568624 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 13 06:20:58.568629 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 13 06:20:58.568634 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 13 06:20:58.568639 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 13 06:20:58.568644 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 13 06:20:58.568649 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 13 06:20:58.568653 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 13 06:20:58.568658 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 13 06:20:58.568662 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 13 06:20:58.568667 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 13 06:20:58.568672 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 13 06:20:58.568677 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 13 06:20:58.568682 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 13 06:20:58.568686 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 13 06:20:58.568691 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 13 06:20:58.568696 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 13 06:20:58.568701 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 13 06:20:58.568705 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 13 06:20:58.568710 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 13 06:20:58.568714 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 13 06:20:58.568720 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 13 06:20:58.568725 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 13 06:20:58.568729 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 13 06:20:58.568734 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 13 06:20:58.568739 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 13 06:20:58.568743 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 13 06:20:58.568748 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 13 06:20:58.568753 kernel: No NUMA configuration found
Feb 13 06:20:58.568757 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 13 06:20:58.568762 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 13 06:20:58.568767 kernel: Zone ranges:
Feb 13 06:20:58.568772 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 06:20:58.568777 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 06:20:58.568781 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 13 06:20:58.568786 kernel: Movable zone start for each node
Feb 13 06:20:58.568791 kernel: Early memory node ranges
Feb 13 06:20:58.568795 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 06:20:58.568800 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 06:20:58.568805 kernel: node 0: [mem 0x0000000040400000-0x00000000819e2fff]
Feb 13 06:20:58.568810 kernel: node 0: [mem 0x00000000819e5000-0x000000008afccfff]
Feb 13 06:20:58.568815 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 13 06:20:58.568820 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 13 06:20:58.568824 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 13 06:20:58.568829 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 13 06:20:58.568834 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 06:20:58.568842 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 06:20:58.568848 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 06:20:58.568853 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 06:20:58.568858 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 13 06:20:58.568864 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 13 06:20:58.568869 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 13 06:20:58.568874 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 13 06:20:58.568879 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 06:20:58.568884 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 06:20:58.568889 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 06:20:58.568894 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 06:20:58.568900 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 06:20:58.568905 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 06:20:58.568909 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 06:20:58.568915 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 06:20:58.568919 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 06:20:58.568924 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 06:20:58.568929 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 06:20:58.568934 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 06:20:58.568939 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 06:20:58.568945 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 06:20:58.568950 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 06:20:58.568955 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 06:20:58.568960 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 06:20:58.568965 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 06:20:58.568970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 06:20:58.568975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 06:20:58.568980 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 06:20:58.568985 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 06:20:58.568991 kernel: TSC deadline timer available
Feb 13 06:20:58.568996 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 06:20:58.569001 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 13 06:20:58.569006 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 06:20:58.569011 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 06:20:58.569016 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 13 06:20:58.569021 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 13 06:20:58.569026 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 13 06:20:58.569031 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 06:20:58.569036 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 06:20:58.569041 kernel: Policy zone: Normal
Feb 13 06:20:58.569047 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 06:20:58.569052 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 06:20:58.569057 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 06:20:58.569062 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 06:20:58.569067 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 06:20:58.569073 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 13 06:20:58.569079 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 06:20:58.569084 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 13 06:20:58.569089 kernel: ftrace: allocated 135 pages with 4 groups
Feb 13 06:20:58.569094 kernel: rcu: Hierarchical RCU implementation.
Feb 13 06:20:58.569099 kernel: rcu: RCU event tracing is enabled.
Feb 13 06:20:58.569104 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 06:20:58.569109 kernel: Rude variant of Tasks RCU enabled.
Feb 13 06:20:58.569114 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 06:20:58.569119 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 06:20:58.569125 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 06:20:58.569130 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 06:20:58.569135 kernel: random: crng init done
Feb 13 06:20:58.569140 kernel: Console: colour dummy device 80x25
Feb 13 06:20:58.569145 kernel: printk: console [tty0] enabled
Feb 13 06:20:58.569150 kernel: printk: console [ttyS1] enabled
Feb 13 06:20:58.569155 kernel: ACPI: Core revision 20210730
Feb 13 06:20:58.569160 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 06:20:58.569165 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 06:20:58.569171 kernel: DMAR: Host address width 39
Feb 13 06:20:58.569176 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 06:20:58.569181 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 06:20:58.569186 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 06:20:58.569191 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 06:20:58.569196 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 06:20:58.569201 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 06:20:58.569206 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 06:20:58.569211 kernel: x2apic enabled
Feb 13 06:20:58.569217 kernel: Switched APIC routing to cluster x2apic.
Feb 13 06:20:58.569222 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 06:20:58.569227 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 06:20:58.569232 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 06:20:58.569237 kernel: process: using mwait in idle threads
Feb 13 06:20:58.569242 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 06:20:58.569247 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 06:20:58.569252 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 06:20:58.569257 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 06:20:58.569263 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 13 06:20:58.569268 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 06:20:58.569273 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 06:20:58.569278 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 06:20:58.569282 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 06:20:58.569287 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 13 06:20:58.569292 kernel: TAA: Mitigation: TSX disabled
Feb 13 06:20:58.569297 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 06:20:58.569302 kernel: SRBDS: Mitigation: Microcode
Feb 13 06:20:58.569307 kernel: GDS: Vulnerable: No microcode
Feb 13 06:20:58.569312 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 06:20:58.569318 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 06:20:58.569323 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 06:20:58.569328 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 06:20:58.569333 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 06:20:58.569338 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 06:20:58.569343 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 06:20:58.569348 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 06:20:58.569352 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 06:20:58.569357 kernel: Freeing SMP alternatives memory: 32K
Feb 13 06:20:58.569362 kernel: pid_max: default: 32768 minimum: 301
Feb 13 06:20:58.569367 kernel: LSM: Security Framework initializing
Feb 13 06:20:58.569372 kernel: SELinux: Initializing.
Feb 13 06:20:58.569378 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 06:20:58.569384 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 06:20:58.569389 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 06:20:58.569394 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 06:20:58.569399 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 06:20:58.569404 kernel: ... version: 4
Feb 13 06:20:58.569409 kernel: ... bit width: 48
Feb 13 06:20:58.569414 kernel: ... generic registers: 4
Feb 13 06:20:58.569419 kernel: ... value mask: 0000ffffffffffff
Feb 13 06:20:58.569424 kernel: ... max period: 00007fffffffffff
Feb 13 06:20:58.569430 kernel: ... fixed-purpose events: 3
Feb 13 06:20:58.569435 kernel: ... event mask: 000000070000000f
Feb 13 06:20:58.569440 kernel: signal: max sigframe size: 2032
Feb 13 06:20:58.569445 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 06:20:58.569450 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 06:20:58.569455 kernel: smp: Bringing up secondary CPUs ...
Feb 13 06:20:58.569460 kernel: x86: Booting SMP configuration:
Feb 13 06:20:58.569465 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 13 06:20:58.569470 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 06:20:58.569476 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 13 06:20:58.569481 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 06:20:58.569486 kernel: smpboot: Max logical packages: 1
Feb 13 06:20:58.569491 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 06:20:58.569496 kernel: devtmpfs: initialized
Feb 13 06:20:58.569501 kernel: x86/mm: Memory block size: 128MB
Feb 13 06:20:58.569506 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819e3000-0x819e3fff] (4096 bytes)
Feb 13 06:20:58.569511 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 13 06:20:58.569517 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 06:20:58.569522 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 06:20:58.569527 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 06:20:58.569532 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 06:20:58.569537 kernel: audit: initializing netlink subsys (disabled)
Feb 13 06:20:58.569542 kernel: audit: type=2000 audit(1707805253.040:1): state=initialized audit_enabled=0 res=1
Feb 13 06:20:58.569547 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 06:20:58.569552 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 06:20:58.569557 kernel: cpuidle: using governor menu
Feb 13 06:20:58.569563 kernel: ACPI: bus type PCI registered
Feb 13 06:20:58.569568 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 06:20:58.569573 kernel: dca service started, version 1.12.1
Feb 13 06:20:58.569578 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 06:20:58.569583 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 13 06:20:58.569588 kernel: PCI: Using configuration type 1 for base access
Feb 13 06:20:58.569593 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 06:20:58.569598 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 06:20:58.569603 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 06:20:58.569609 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 06:20:58.569614 kernel: ACPI: Added _OSI(Module Device)
Feb 13 06:20:58.569619 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 06:20:58.569624 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 06:20:58.569629 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 06:20:58.569634 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 13 06:20:58.569639 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 13 06:20:58.569643 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 13 06:20:58.569648 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 06:20:58.569654 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:20:58.569659 kernel: ACPI: SSDT 0xFFFF910840213700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 06:20:58.569664 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 13 06:20:58.569669 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:20:58.569674 kernel: ACPI: SSDT 0xFFFF910841AE4800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 06:20:58.569679 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:20:58.569684 kernel: ACPI: SSDT 0xFFFF910841A59000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 06:20:58.569689 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:20:58.569694 kernel: ACPI: SSDT 0xFFFF910841A5B000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 06:20:58.569699 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:20:58.569704 kernel: ACPI: SSDT 0xFFFF910840149000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 06:20:58.569709 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 06:20:58.569714 kernel: ACPI: SSDT 0xFFFF910841AE3C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 06:20:58.569719 kernel: ACPI: Interpreter enabled
Feb 13 06:20:58.569724 kernel: ACPI: PM: (supports S0 S5)
Feb 13 06:20:58.569729 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 06:20:58.569734 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 06:20:58.569739 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 06:20:58.569744 kernel: HEST: Table parsing has been initialized.
Feb 13 06:20:58.569750 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 06:20:58.569755 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 06:20:58.569760 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 06:20:58.569765 kernel: ACPI: PM: Power Resource [USBC]
Feb 13 06:20:58.569770 kernel: ACPI: PM: Power Resource [V0PR]
Feb 13 06:20:58.569774 kernel: ACPI: PM: Power Resource [V1PR]
Feb 13 06:20:58.569779 kernel: ACPI: PM: Power Resource [V2PR]
Feb 13 06:20:58.569784 kernel: ACPI: PM: Power Resource [WRST]
Feb 13 06:20:58.569789 kernel: ACPI: PM: Power Resource [FN00]
Feb 13 06:20:58.569795 kernel: ACPI: PM: Power Resource [FN01]
Feb 13 06:20:58.569800 kernel: ACPI: PM: Power Resource [FN02]
Feb 13 06:20:58.569805 kernel: ACPI: PM: Power Resource [FN03]
Feb 13 06:20:58.569810 kernel: ACPI: PM: Power Resource [FN04]
Feb 13 06:20:58.569815 kernel: ACPI: PM: Power Resource [PIN]
Feb 13 06:20:58.569820 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 06:20:58.569885 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 06:20:58.569931 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 06:20:58.569975 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 06:20:58.569983 kernel: PCI host bridge to bus 0000:00
Feb 13 06:20:58.570026 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 06:20:58.570064 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 06:20:58.570101 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 06:20:58.570138 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 13 06:20:58.570174 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 06:20:58.570212 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 06:20:58.570264 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 06:20:58.570313 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 06:20:58.570358 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 06:20:58.570406 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 06:20:58.570449 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 13 06:20:58.570497 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 06:20:58.570540 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 13 06:20:58.570587 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 06:20:58.570630 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 13 06:20:58.570671 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 06:20:58.570715 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 06:20:58.570760 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 13 06:20:58.570801 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 13 06:20:58.570849 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 06:20:58.570891 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 06:20:58.570936 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 06:20:58.570978 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 06:20:58.571022 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 06:20:58.571067 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 13 06:20:58.571108 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 06:20:58.571153 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 06:20:58.571193 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 13 06:20:58.571235 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 06:20:58.571279 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 06:20:58.571321 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 13 06:20:58.571363 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 06:20:58.571410 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 06:20:58.571453 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 13 06:20:58.571494 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 13 06:20:58.571535 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 13 06:20:58.571577 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 13 06:20:58.571626 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 13 06:20:58.571670 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 13 06:20:58.571711 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 06:20:58.571758 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 06:20:58.571801 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 06:20:58.571852 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 06:20:58.571895 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 06:20:58.571943 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 06:20:58.571986 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 06:20:58.572032 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 06:20:58.572077 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 06:20:58.572122 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 13 06:20:58.572167 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 13 06:20:58.572212 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 06:20:58.572254 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 06:20:58.572302 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 06:20:58.572352 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 06:20:58.572398 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 13 06:20:58.572440 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 06:20:58.572486 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 06:20:58.572528 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 06:20:58.572576 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 06:20:58.572623 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 06:20:58.572668 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 13 06:20:58.572710 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 13 06:20:58.572754 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 06:20:58.572797 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 06:20:58.572845 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 06:20:58.572888 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 06:20:58.572933 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 13 06:20:58.572977 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 13 06:20:58.573019 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 06:20:58.573063 kernel: pci 0000:01:00.1: VF(n) BAR0
space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 06:20:58.573105 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 06:20:58.573148 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 06:20:58.573189 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 06:20:58.573231 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 06:20:58.573281 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 06:20:58.573326 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 06:20:58.573369 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 06:20:58.573416 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 06:20:58.573459 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 06:20:58.573501 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 06:20:58.573543 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 06:20:58.573587 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 06:20:58.573636 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 06:20:58.573680 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 06:20:58.573724 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 06:20:58.573769 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 06:20:58.573812 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 06:20:58.573856 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 06:20:58.573899 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 06:20:58.573943 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 06:20:58.573984 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 06:20:58.574035 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 06:20:58.574079 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 06:20:58.574123 kernel: 
pci 0000:06:00.0: supports D1 D2 Feb 13 06:20:58.574200 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 06:20:58.574264 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 06:20:58.574307 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 06:20:58.574351 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 06:20:58.574400 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 06:20:58.574451 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 06:20:58.574497 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 06:20:58.574543 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 06:20:58.574588 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 06:20:58.574633 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 06:20:58.574680 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 06:20:58.574727 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 06:20:58.574770 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 06:20:58.574813 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 06:20:58.574858 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 06:20:58.574866 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 06:20:58.574872 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 06:20:58.574879 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 06:20:58.574884 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 06:20:58.574889 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 06:20:58.574895 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 06:20:58.574900 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 06:20:58.574905 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 06:20:58.574911 kernel: 
iommu: Default domain type: Translated Feb 13 06:20:58.574916 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 06:20:58.574960 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 06:20:58.575008 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 06:20:58.575053 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 06:20:58.575061 kernel: vgaarb: loaded Feb 13 06:20:58.575066 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 06:20:58.575072 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 06:20:58.575077 kernel: PTP clock support registered Feb 13 06:20:58.575083 kernel: PCI: Using ACPI for IRQ routing Feb 13 06:20:58.575088 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 06:20:58.575093 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 06:20:58.575100 kernel: e820: reserve RAM buffer [mem 0x819e3000-0x83ffffff] Feb 13 06:20:58.575105 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 06:20:58.575110 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 06:20:58.575115 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 06:20:58.575120 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 06:20:58.575126 kernel: clocksource: Switched to clocksource tsc-early Feb 13 06:20:58.575131 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 06:20:58.575136 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 06:20:58.575142 kernel: pnp: PnP ACPI init Feb 13 06:20:58.575187 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 06:20:58.575229 kernel: pnp 00:02: [dma 0 disabled] Feb 13 06:20:58.575274 kernel: pnp 00:03: [dma 0 disabled] Feb 13 06:20:58.575315 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 06:20:58.575354 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 06:20:58.575398 
kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 06:20:58.575442 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 06:20:58.575481 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 06:20:58.575518 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 06:20:58.575556 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 06:20:58.575593 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 06:20:58.575630 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 06:20:58.575669 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 06:20:58.575709 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 13 06:20:58.575752 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 06:20:58.575790 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 06:20:58.575828 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 06:20:58.575865 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 06:20:58.575903 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 06:20:58.575940 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 06:20:58.575980 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 06:20:58.576021 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 06:20:58.576029 kernel: pnp: PnP ACPI: found 10 devices Feb 13 06:20:58.576035 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 06:20:58.576040 kernel: NET: Registered PF_INET protocol family Feb 13 06:20:58.576045 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 06:20:58.576051 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 
06:20:58.576056 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 06:20:58.576063 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 06:20:58.576068 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 06:20:58.576074 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 06:20:58.576079 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 06:20:58.576084 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 06:20:58.576090 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 06:20:58.576095 kernel: NET: Registered PF_XDP protocol family Feb 13 06:20:58.576138 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 06:20:58.576181 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 06:20:58.576224 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 06:20:58.576268 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 06:20:58.576313 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 06:20:58.576357 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 06:20:58.576403 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 06:20:58.576446 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 06:20:58.576488 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 06:20:58.576533 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 06:20:58.576574 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 06:20:58.576616 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 06:20:58.576658 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 06:20:58.576700 kernel: pci 0000:00:1b.4: bridge 
window [mem 0x95400000-0x954fffff] Feb 13 06:20:58.576745 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 06:20:58.576786 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 06:20:58.576829 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 06:20:58.576870 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 06:20:58.576915 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 06:20:58.576957 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 06:20:58.577001 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 06:20:58.577043 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 06:20:58.577085 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 06:20:58.577129 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 06:20:58.577168 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 06:20:58.577205 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 06:20:58.577242 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 06:20:58.577278 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 06:20:58.577315 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 06:20:58.577352 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 06:20:58.577399 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 06:20:58.577441 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 06:20:58.577483 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 06:20:58.577523 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 06:20:58.577565 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 06:20:58.577605 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 06:20:58.577647 kernel: pci_bus 0000:06: resource 0 [io 
0x3000-0x3fff] Feb 13 06:20:58.577688 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 06:20:58.577731 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 06:20:58.577773 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 06:20:58.577781 kernel: PCI: CLS 64 bytes, default 64 Feb 13 06:20:58.577786 kernel: DMAR: No ATSR found Feb 13 06:20:58.577791 kernel: DMAR: No SATC found Feb 13 06:20:58.577797 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 06:20:58.577838 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 06:20:58.577884 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 06:20:58.577927 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 06:20:58.577968 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 06:20:58.578011 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 06:20:58.578052 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 06:20:58.578095 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 06:20:58.578136 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 06:20:58.578179 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 06:20:58.578222 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 06:20:58.578263 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 06:20:58.578307 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 06:20:58.578349 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 06:20:58.578394 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 06:20:58.578437 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 06:20:58.578479 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 06:20:58.578522 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 13 06:20:58.578566 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 06:20:58.578607 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 06:20:58.578649 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 06:20:58.578692 kernel: 
pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 06:20:58.578735 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 06:20:58.578779 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 06:20:58.578822 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 06:20:58.578867 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 06:20:58.578912 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 06:20:58.578958 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 06:20:58.578967 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 06:20:58.578972 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 06:20:58.578978 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 06:20:58.578983 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 06:20:58.578988 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 06:20:58.578994 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 06:20:58.579000 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 06:20:58.579044 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 06:20:58.579053 kernel: Initialise system trusted keyrings Feb 13 06:20:58.579058 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 06:20:58.579064 kernel: Key type asymmetric registered Feb 13 06:20:58.579069 kernel: Asymmetric key parser 'x509' registered Feb 13 06:20:58.579074 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 06:20:58.579079 kernel: io scheduler mq-deadline registered Feb 13 06:20:58.579086 kernel: io scheduler kyber registered Feb 13 06:20:58.579091 kernel: io scheduler bfq registered Feb 13 06:20:58.579133 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 06:20:58.579176 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 06:20:58.579218 kernel: pcieport 
0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 06:20:58.579261 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 06:20:58.579304 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 06:20:58.579346 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 06:20:58.579400 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 06:20:58.579408 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 06:20:58.579414 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 06:20:58.579419 kernel: pstore: Registered erst as persistent store backend Feb 13 06:20:58.579425 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 06:20:58.579430 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 06:20:58.579435 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 06:20:58.579441 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 06:20:58.579447 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 06:20:58.579491 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 06:20:58.579500 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 06:20:58.579537 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 06:20:58.579577 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 06:20:58.579615 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T06:20:57 UTC (1707805257) Feb 13 06:20:58.579652 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 06:20:58.579660 kernel: fail to initialize ptp_kvm Feb 13 06:20:58.579667 kernel: intel_pstate: Intel P-state driver initializing Feb 13 06:20:58.579673 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 06:20:58.579678 kernel: intel_pstate: HWP enabled Feb 13 06:20:58.579683 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 06:20:58.579688 kernel: vesafb: scrolling: redraw Feb 13 06:20:58.579694 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 06:20:58.579699 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000cc312300, using 768k, total 768k Feb 13 06:20:58.579704 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 06:20:58.579710 kernel: fb0: VESA VGA frame buffer device Feb 13 06:20:58.579716 kernel: NET: Registered PF_INET6 protocol family Feb 13 06:20:58.579721 kernel: Segment Routing with IPv6 Feb 13 06:20:58.579727 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 06:20:58.579732 kernel: NET: Registered PF_PACKET protocol family Feb 13 06:20:58.579737 kernel: Key type dns_resolver registered Feb 13 06:20:58.579742 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 06:20:58.579748 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 13 06:20:58.579753 kernel: IPI shorthand broadcast: enabled Feb 13 06:20:58.579758 kernel: sched_clock: Marking stable (1733353288, 1334611811)->(4488457465, -1420492366) Feb 13 06:20:58.579765 kernel: registered taskstats version 1 Feb 13 06:20:58.579770 kernel: Loading compiled-in X.509 certificates Feb 13 06:20:58.579775 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 06:20:58.579781 kernel: Key type .fscrypt registered Feb 13 06:20:58.579786 kernel: Key type fscrypt-provisioning registered Feb 13 06:20:58.579791 kernel: pstore: Using crash dump compression: deflate Feb 13 06:20:58.579796 kernel: ima: Allocated hash algorithm: sha1 Feb 13 06:20:58.579802 kernel: ima: No architecture policies found Feb 13 06:20:58.579807 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 06:20:58.579813 kernel: Write protecting the kernel read-only data: 28672k Feb 13 06:20:58.579818 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 06:20:58.579824 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 06:20:58.579829 kernel: Run /init as init process Feb 13 06:20:58.579834 kernel: with arguments: Feb 13 06:20:58.579840 kernel: /init Feb 13 06:20:58.579845 kernel: with environment: Feb 13 06:20:58.579850 kernel: HOME=/ Feb 13 06:20:58.579856 kernel: TERM=linux Feb 13 06:20:58.579862 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 06:20:58.579868 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 06:20:58.579875 systemd[1]: Detected architecture x86-64. Feb 13 06:20:58.579881 systemd[1]: Running in initrd. 
Feb 13 06:20:58.579886 systemd[1]: No hostname configured, using default hostname. Feb 13 06:20:58.579892 systemd[1]: Hostname set to <localhost>. Feb 13 06:20:58.579897 systemd[1]: Initializing machine ID from random generator. Feb 13 06:20:58.579904 systemd[1]: Queued start job for default target initrd.target. Feb 13 06:20:58.579909 systemd[1]: Started systemd-ask-password-console.path. Feb 13 06:20:58.579914 systemd[1]: Reached target cryptsetup.target. Feb 13 06:20:58.579920 systemd[1]: Reached target ignition-diskful-subsequent.target. Feb 13 06:20:58.579925 systemd[1]: Reached target paths.target. Feb 13 06:20:58.579931 systemd[1]: Reached target slices.target. Feb 13 06:20:58.579936 systemd[1]: Reached target swap.target. Feb 13 06:20:58.579941 systemd[1]: Reached target timers.target. Feb 13 06:20:58.579948 systemd[1]: Listening on iscsid.socket. Feb 13 06:20:58.579954 systemd[1]: Listening on iscsiuio.socket. Feb 13 06:20:58.579959 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 06:20:58.579965 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 06:20:58.579970 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Feb 13 06:20:58.579976 systemd[1]: Listening on systemd-journald.socket. Feb 13 06:20:58.579981 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Feb 13 06:20:58.579987 kernel: clocksource: Switched to clocksource tsc Feb 13 06:20:58.579993 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 06:20:58.579998 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 06:20:58.580004 systemd[1]: Reached target sockets.target. Feb 13 06:20:58.580009 systemd[1]: Starting iscsiuio.service... Feb 13 06:20:58.580015 systemd[1]: Starting kmod-static-nodes.service... Feb 13 06:20:58.580020 kernel: SCSI subsystem initialized Feb 13 06:20:58.580026 systemd[1]: Starting systemd-fsck-usr.service... 
Feb 13 06:20:58.580031 kernel: Loading iSCSI transport class v2.0-870. Feb 13 06:20:58.580037 systemd[1]: Starting systemd-journald.service... Feb 13 06:20:58.580043 systemd[1]: Starting systemd-modules-load.service... Feb 13 06:20:58.580051 systemd-journald[269]: Journal started Feb 13 06:20:58.580078 systemd-journald[269]: Runtime Journal (/run/log/journal/a1627d40e9524ba3bb42679978b207f1) is 8.0M, max 640.1M, 632.1M free. Feb 13 06:20:58.583008 systemd-modules-load[270]: Inserted module 'overlay' Feb 13 06:20:58.607386 systemd[1]: Starting systemd-vconsole-setup.service... Feb 13 06:20:58.640388 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 06:20:58.640404 systemd[1]: Started iscsiuio.service. Feb 13 06:20:58.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.666428 kernel: Bridge firewalling registered Feb 13 06:20:58.666443 kernel: audit: type=1130 audit(1707805258.664:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.666450 systemd[1]: Started systemd-journald.service. Feb 13 06:20:58.725952 systemd-modules-load[270]: Inserted module 'br_netfilter' Feb 13 06:20:58.769357 kernel: audit: type=1130 audit(1707805258.724:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.726239 systemd[1]: Finished kmod-static-nodes.service. 
Feb 13 06:20:58.865193 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 06:20:58.865205 kernel: audit: type=1130 audit(1707805258.788:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.865213 kernel: device-mapper: uevent: version 1.0.3 Feb 13 06:20:58.865220 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 13 06:20:58.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.789560 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 06:20:58.933206 kernel: audit: type=1130 audit(1707805258.887:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.887342 systemd-modules-load[270]: Inserted module 'dm_multipath' Feb 13 06:20:58.986525 kernel: audit: type=1130 audit(1707805258.940:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.888727 systemd[1]: Finished systemd-modules-load.service. 
Feb 13 06:20:59.041292 kernel: audit: type=1130 audit(1707805258.993:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:58.941818 systemd[1]: Finished systemd-vconsole-setup.service. Feb 13 06:20:58.994942 systemd[1]: Starting dracut-cmdline-ask.service... Feb 13 06:20:59.041701 systemd[1]: Starting systemd-sysctl.service... Feb 13 06:20:59.041984 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 06:20:59.044939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 06:20:59.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:59.045365 systemd[1]: Finished systemd-sysctl.service. Feb 13 06:20:59.094445 kernel: audit: type=1130 audit(1707805259.043:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:59.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:59.106731 systemd[1]: Finished dracut-cmdline-ask.service. Feb 13 06:20:59.212480 kernel: audit: type=1130 audit(1707805259.105:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:20:59.212538 kernel: audit: type=1130 audit(1707805259.161:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:59.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:59.162977 systemd[1]: Starting dracut-cmdline.service... Feb 13 06:20:59.245501 kernel: iscsi: registered transport (tcp) Feb 13 06:20:59.245512 dracut-cmdline[291]: dracut-dracut-053 Feb 13 06:20:59.245512 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 13 06:20:59.245512 dracut-cmdline[291]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 06:20:59.346511 kernel: iscsi: registered transport (qla4xxx) Feb 13 06:20:59.346527 kernel: QLogic iSCSI HBA Driver Feb 13 06:20:59.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:59.306357 systemd[1]: Finished dracut-cmdline.service. Feb 13 06:20:59.328162 systemd[1]: Starting dracut-pre-udev.service... Feb 13 06:20:59.391482 kernel: raid6: avx2x4 gen() 44714 MB/s Feb 13 06:20:59.354848 systemd[1]: Starting iscsid.service... Feb 13 06:20:59.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:20:59.373545 systemd[1]: Started iscsid.service. Feb 13 06:20:59.429488 kernel: raid6: avx2x4 xor() 14922 MB/s Feb 13 06:20:59.429499 iscsid[454]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 06:20:59.429499 iscsid[454]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 06:20:59.429499 iscsid[454]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 13 06:20:59.429499 iscsid[454]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 13 06:20:59.429499 iscsid[454]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 13 06:20:59.429499 iscsid[454]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 06:20:59.429499 iscsid[454]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 06:20:59.597504 kernel: raid6: avx2x2 gen() 53827 MB/s Feb 13 06:20:59.597517 kernel: raid6: avx2x2 xor() 32794 MB/s Feb 13 06:20:59.597523 kernel: raid6: avx2x1 gen() 45093 MB/s Feb 13 06:20:59.597530 kernel: raid6: avx2x1 xor() 28131 MB/s Feb 13 06:20:59.597536 kernel: raid6: sse2x4 gen() 21534 MB/s Feb 13 06:20:59.639415 kernel: raid6: sse2x4 xor() 11895 MB/s Feb 13 06:20:59.674418 kernel: raid6: sse2x2 gen() 22161 MB/s Feb 13 06:20:59.708439 kernel: raid6: sse2x2 xor() 13685 MB/s Feb 13 06:20:59.742439 kernel: raid6: sse2x1 gen() 18689 MB/s Feb 13 06:20:59.795188 kernel: raid6: sse2x1 xor() 9122 MB/s Feb 13 06:20:59.795203 kernel: raid6: using algorithm avx2x2 gen() 53827 MB/s Feb 13 06:20:59.795210 kernel: raid6: .... 
xor() 32794 MB/s, rmw enabled Feb 13 06:20:59.813688 kernel: raid6: using avx2x2 recovery algorithm Feb 13 06:20:59.860428 kernel: xor: automatically using best checksumming function avx Feb 13 06:20:59.939416 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 06:20:59.944518 systemd[1]: Finished dracut-pre-udev.service. Feb 13 06:20:59.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:59.952000 audit: BPF prog-id=6 op=LOAD Feb 13 06:20:59.952000 audit: BPF prog-id=7 op=LOAD Feb 13 06:20:59.954408 systemd[1]: Starting systemd-udevd.service... Feb 13 06:20:59.962559 systemd-udevd[470]: Using default interface naming scheme 'v252'. Feb 13 06:20:59.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:20:59.968624 systemd[1]: Started systemd-udevd.service. Feb 13 06:21:00.008538 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Feb 13 06:20:59.984970 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 06:21:00.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:00.011121 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 06:21:00.025369 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 06:21:00.075862 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 06:21:00.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:00.076427 systemd[1]: Starting dracut-initqueue.service... 
Feb 13 06:21:00.102391 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 06:21:00.104390 kernel: libata version 3.00 loaded. Feb 13 06:21:00.104414 kernel: ACPI: bus type USB registered Feb 13 06:21:00.140715 kernel: usbcore: registered new interface driver usbfs Feb 13 06:21:00.140760 kernel: usbcore: registered new interface driver hub Feb 13 06:21:00.159111 kernel: usbcore: registered new device driver usb Feb 13 06:21:00.177390 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 06:21:00.213105 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 06:21:00.213388 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 06:21:00.246682 kernel: AES CTR mode by8 optimization enabled Feb 13 06:21:00.246702 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 06:21:00.251388 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 06:21:00.251467 kernel: pps pps0: new PPS source ptp0 Feb 13 06:21:00.251529 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 06:21:00.251586 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 06:21:00.258425 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 06:21:00.258497 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 06:21:00.267433 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 06:21:00.278388 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 06:21:00.278486 kernel: scsi host0: ahci Feb 13 06:21:00.278564 kernel: scsi host1: ahci Feb 13 06:21:00.278621 kernel: scsi host2: ahci Feb 13 06:21:00.278690 kernel: scsi host3: ahci Feb 13 06:21:00.278751 kernel: scsi host4: ahci Feb 13 06:21:00.278868 kernel: scsi host5: ahci Feb 13 06:21:00.278919 kernel: scsi host6: ahci Feb 13 06:21:00.278970 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Feb 13 
06:21:00.278978 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Feb 13 06:21:00.278985 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Feb 13 06:21:00.278991 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Feb 13 06:21:00.278997 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Feb 13 06:21:00.279004 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Feb 13 06:21:00.279011 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Feb 13 06:21:00.282432 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 06:21:00.318439 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 06:21:00.318525 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 06:21:00.357336 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 06:21:00.357420 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:24 Feb 13 06:21:00.395072 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 06:21:00.395143 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 06:21:00.395205 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 06:21:00.420936 kernel: hub 1-0:1.0: USB hub found Feb 13 06:21:00.478600 kernel: pps pps1: new PPS source ptp1 Feb 13 06:21:00.478703 kernel: hub 1-0:1.0: 16 ports detected Feb 13 06:21:00.478786 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 06:21:00.513387 kernel: hub 2-0:1.0: USB hub found Feb 13 06:21:00.513473 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 06:21:00.513539 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 06:21:00.529389 kernel: hub 2-0:1.0: 10 ports detected Feb 13 06:21:00.529557 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 06:21:00.549879 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:25 Feb 13 06:21:00.563334 kernel: usb: port power management may be unreliable Feb 13 06:21:00.577390 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 06:21:00.598491 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 06:21:00.598521 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 06:21:00.598675 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 06:21:00.714392 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 06:21:00.714468 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 06:21:00.738446 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 06:21:00.738519 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 06:21:00.754449 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 06:21:00.754521 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 06:21:00.769443 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 06:21:00.883450 kernel: hub 1-14:1.0: USB hub found Feb 13 06:21:00.883531 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 06:21:00.915302 kernel: hub 1-14:1.0: 4 ports detected Feb 13 06:21:00.915399 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU004, max UDMA/133 Feb 13 06:21:01.050429 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 06:21:01.065427 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 06:21:01.065500 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Feb 13 06:21:01.120414 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 06:21:01.120485 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 06:21:01.150966 kernel: ata1.00: Features: NCQ-prio Feb 13 06:21:01.183597 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 06:21:01.183614 kernel: ata2.00: Features: NCQ-prio Feb 13 06:21:01.203448 kernel: ata1.00: configured for UDMA/133 Feb 13 06:21:01.203464 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U004 PQ: 0 ANSI: 5 Feb 13 06:21:01.215439 kernel: ata2.00: configured for UDMA/133 Feb 13 
06:21:01.215456 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 06:21:01.255457 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Feb 13 06:21:01.295439 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 06:21:01.321388 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 06:21:01.337944 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 06:21:01.337960 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:21:01.354073 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 06:21:01.354158 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 06:21:01.392080 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 06:21:01.392156 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 06:21:01.392221 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 06:21:01.392276 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 06:21:01.393451 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 06:21:01.408617 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 06:21:01.440798 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 06:21:01.440874 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 06:21:01.473086 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 06:21:01.473162 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 06:21:01.545392 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 06:21:01.561033 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:21:01.576320 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 06:21:01.576335 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 06:21:01.609421 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 06:21:01.624515 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 06:21:01.624530 kernel: sd 0:0:0:0: 
[sda] Attached SCSI disk Feb 13 06:21:01.657391 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 13 06:21:01.674458 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 13 06:21:01.767508 kernel: usbcore: registered new interface driver usbhid Feb 13 06:21:01.767527 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by (udev-worker) (521) Feb 13 06:21:01.767535 kernel: usbhid: USB HID core driver Feb 13 06:21:01.767542 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 13 06:21:01.767669 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 06:21:01.744512 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 13 06:21:01.780334 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 06:21:01.810985 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 06:21:01.876708 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 06:21:01.876804 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 06:21:01.876813 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 06:21:01.844320 systemd[1]: Reached target initrd-root-device.target. Feb 13 06:21:01.912700 systemd[1]: Starting disk-uuid.service... Feb 13 06:21:01.921758 systemd[1]: Finished dracut-initqueue.service. Feb 13 06:21:01.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:01.947847 systemd[1]: disk-uuid.service: Deactivated successfully. 
Feb 13 06:21:02.016604 kernel: kauditd_printk_skb: 10 callbacks suppressed Feb 13 06:21:02.016617 kernel: audit: type=1130 audit(1707805261.946:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:01.947897 systemd[1]: Finished disk-uuid.service. Feb 13 06:21:02.106794 kernel: audit: type=1130 audit(1707805262.015:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.106806 kernel: audit: type=1131 audit(1707805262.015:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.016796 systemd[1]: Reached target local-fs-pre.target. Feb 13 06:21:02.115596 systemd[1]: Reached target local-fs.target. Feb 13 06:21:02.115705 systemd[1]: Reached target remote-fs-pre.target. Feb 13 06:21:02.129774 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 06:21:02.151609 systemd[1]: Reached target remote-fs.target. Feb 13 06:21:02.165585 systemd[1]: Reached target sysinit.target. Feb 13 06:21:02.181581 systemd[1]: Reached target basic.target. Feb 13 06:21:02.195038 systemd[1]: Starting dracut-pre-mount.service... Feb 13 06:21:02.213926 systemd[1]: Starting verity-setup.service... 
Feb 13 06:21:02.251471 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 06:21:02.238635 systemd[1]: Finished dracut-pre-mount.service. Feb 13 06:21:02.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.307442 kernel: audit: type=1130 audit(1707805262.258:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.281490 systemd[1]: Starting systemd-fsck-root.service... Feb 13 06:21:02.308553 systemd[1]: Found device dev-mapper-usr.device. Feb 13 06:21:02.308992 systemd[1]: Mounting sysusr-usr.mount... Feb 13 06:21:02.309233 systemd[1]: Finished verity-setup.service. Feb 13 06:21:02.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.317821 systemd-fsck[722]: ROOT: clean, 631/553520 files, 112310/553472 blocks Feb 13 06:21:02.381517 kernel: audit: type=1130 audit(1707805262.307:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.371787 systemd[1]: Finished systemd-fsck-root.service. Feb 13 06:21:02.463727 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 13 06:21:02.463756 kernel: audit: type=1130 audit(1707805262.412:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:21:02.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.413818 systemd[1]: Mounted sysusr-usr.mount. Feb 13 06:21:02.471001 systemd[1]: Mounting sysroot.mount... Feb 13 06:21:02.526564 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 06:21:02.519445 systemd[1]: Mounted sysroot.mount. Feb 13 06:21:02.534621 systemd[1]: Reached target initrd-root-fs.target. Feb 13 06:21:02.550975 systemd[1]: Mounting sysroot-usr.mount... Feb 13 06:21:02.566735 systemd[1]: Mounted sysroot-usr.mount. Feb 13 06:21:02.583037 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 06:21:02.594035 systemd[1]: Starting initrd-setup-root.service... Feb 13 06:21:02.695520 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 06:21:02.695537 kernel: BTRFS info (device sda6): using free space tree Feb 13 06:21:02.695544 kernel: BTRFS info (device sda6): has skinny extents Feb 13 06:21:02.695551 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 06:21:02.664836 systemd[1]: Finished initrd-setup-root.service. Feb 13 06:21:02.754059 kernel: audit: type=1130 audit(1707805262.702:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.704668 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 13 06:21:02.762998 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 13 06:21:02.770715 initrd-setup-root-after-ignition[805]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 06:21:02.786707 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 13 06:21:02.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.811702 systemd[1]: Reached target ignition-subsequent.target. Feb 13 06:21:02.881637 kernel: audit: type=1130 audit(1707805262.810:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.874032 systemd[1]: Starting initrd-parse-etc.service... Feb 13 06:21:02.894535 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 06:21:03.012146 kernel: audit: type=1130 audit(1707805262.904:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.012160 kernel: audit: type=1131 audit(1707805262.904:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:02.894582 systemd[1]: Finished initrd-parse-etc.service. 
Feb 13 06:21:02.905739 systemd[1]: Reached target initrd-fs.target. Feb 13 06:21:03.020616 systemd[1]: Reached target initrd.target. Feb 13 06:21:03.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.020673 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 13 06:21:03.021025 systemd[1]: Starting dracut-pre-pivot.service... Feb 13 06:21:03.040730 systemd[1]: Finished dracut-pre-pivot.service. Feb 13 06:21:03.056936 systemd[1]: Starting initrd-cleanup.service... Feb 13 06:21:03.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.074807 systemd[1]: Stopped target remote-cryptsetup.target. Feb 13 06:21:03.091678 systemd[1]: Stopped target timers.target. Feb 13 06:21:03.107741 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 06:21:03.107959 systemd[1]: Stopped dracut-pre-pivot.service. Feb 13 06:21:03.124228 systemd[1]: Stopped target initrd.target. Feb 13 06:21:03.137929 systemd[1]: Stopped target basic.target. Feb 13 06:21:03.152935 systemd[1]: Stopped target ignition-subsequent.target. Feb 13 06:21:03.168941 systemd[1]: Stopped target ignition-diskful-subsequent.target. Feb 13 06:21:03.187948 systemd[1]: Stopped target initrd-root-device.target. Feb 13 06:21:03.204935 systemd[1]: Stopped target paths.target. Feb 13 06:21:03.219933 systemd[1]: Stopped target remote-fs.target. Feb 13 06:21:03.236933 systemd[1]: Stopped target remote-fs-pre.target. Feb 13 06:21:03.254933 systemd[1]: Stopped target slices.target. Feb 13 06:21:03.270930 systemd[1]: Stopped target sockets.target. Feb 13 06:21:03.288929 systemd[1]: Stopped target sysinit.target. 
Feb 13 06:21:03.305059 systemd[1]: Stopped target local-fs.target. Feb 13 06:21:03.321921 systemd[1]: Stopped target local-fs-pre.target. Feb 13 06:21:03.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.337911 systemd[1]: Stopped target swap.target. Feb 13 06:21:03.351899 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 06:21:03.352139 systemd[1]: Closed iscsid.socket. Feb 13 06:21:03.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.365979 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 06:21:03.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.366208 systemd[1]: Closed iscsiuio.socket. Feb 13 06:21:03.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.380947 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 06:21:03.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.381263 systemd[1]: Stopped dracut-pre-mount.service. Feb 13 06:21:03.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 06:21:03.397151 systemd[1]: Stopped target cryptsetup.target. Feb 13 06:21:03.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.411829 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 06:21:03.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.415628 systemd[1]: Stopped systemd-ask-password-console.path. Feb 13 06:21:03.428846 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 06:21:03.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.429182 systemd[1]: Stopped dracut-initqueue.service. Feb 13 06:21:03.446069 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 06:21:03.446425 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 13 06:21:03.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.463038 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 06:21:03.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.463360 systemd[1]: Stopped initrd-setup-root.service. 
Feb 13 06:21:03.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.480044 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 06:21:03.480360 systemd[1]: Stopped systemd-sysctl.service. Feb 13 06:21:03.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.497131 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 06:21:03.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.497470 systemd[1]: Stopped systemd-modules-load.service. Feb 13 06:21:03.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.515033 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 06:21:03.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.515351 systemd[1]: Stopped systemd-udev-trigger.service. Feb 13 06:21:03.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 13 06:21:03.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 06:21:03.531037 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 06:21:03.531355 systemd[1]: Stopped dracut-pre-trigger.service. Feb 13 06:21:03.547427 systemd[1]: Stopping systemd-udevd.service... Feb 13 06:21:03.828537 iscsid[454]: iscsid shutting down. Feb 13 06:21:03.566036 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 06:21:03.566484 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 06:21:03.566551 systemd[1]: Stopped systemd-udevd.service. Feb 13 06:21:03.572944 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 06:21:03.573005 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 06:21:03.596606 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 06:21:03.596640 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 06:21:03.611603 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 06:21:03.611769 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 06:21:03.629765 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 06:21:03.629878 systemd[1]: Stopped dracut-cmdline.service. Feb 13 06:21:03.646875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 06:21:03.647010 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 06:21:03.666563 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 06:21:03.679588 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 06:21:03.679618 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 13 06:21:03.695701 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Feb 13 06:21:03.695732 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 06:21:03.710727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 06:21:03.710788 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 06:21:03.731647 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 06:21:03.732724 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 06:21:03.732878 systemd[1]: Finished initrd-cleanup.service. Feb 13 06:21:03.746234 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 06:21:03.746439 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 06:21:03.765702 systemd[1]: Reached target initrd-switch-root.target. Feb 13 06:21:03.781297 systemd[1]: Starting initrd-switch-root.service... Feb 13 06:21:03.793890 systemd[1]: Switching root. Feb 13 06:21:03.829162 systemd-journald[269]: Journal stopped Feb 13 06:21:07.817052 systemd-journald[269]: Received SIGTERM from PID 1 (n/a). Feb 13 06:21:07.817066 kernel: SELinux: Class mctp_socket not defined in policy. Feb 13 06:21:07.817075 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 13 06:21:07.817081 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 13 06:21:07.817086 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 06:21:07.817091 kernel: SELinux: policy capability open_perms=1
Feb 13 06:21:07.817107 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 06:21:07.817112 kernel: SELinux: policy capability always_check_network=0
Feb 13 06:21:07.817118 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 06:21:07.817124 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 06:21:07.817129 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 06:21:07.817134 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 06:21:07.817140 systemd[1]: Successfully loaded SELinux policy in 312.058ms.
Feb 13 06:21:07.817146 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.763ms.
Feb 13 06:21:07.817154 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 13 06:21:07.817161 systemd[1]: Detected architecture x86-64.
Feb 13 06:21:07.817167 systemd[1]: Detected first boot.
Feb 13 06:21:07.817172 systemd[1]: Hostname set to .
Feb 13 06:21:07.817178 systemd[1]: Initializing machine ID from random generator.
Feb 13 06:21:07.817184 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 13 06:21:07.817190 systemd[1]: Populated /etc with preset unit settings.
Feb 13 06:21:07.817197 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 13 06:21:07.817203 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 13 06:21:07.817210 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 06:21:07.817216 systemd[1]: iscsid.service: Deactivated successfully.
Feb 13 06:21:07.817222 systemd[1]: Stopped iscsid.service.
Feb 13 06:21:07.817227 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 06:21:07.817235 systemd[1]: Stopped initrd-switch-root.service.
Feb 13 06:21:07.817241 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 06:21:07.817247 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 13 06:21:07.817253 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 13 06:21:07.817259 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 13 06:21:07.817265 systemd[1]: Created slice system-getty.slice.
Feb 13 06:21:07.817282 systemd[1]: Created slice system-modprobe.slice.
Feb 13 06:21:07.817289 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 13 06:21:07.817295 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 13 06:21:07.817302 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 13 06:21:07.817308 systemd[1]: Created slice user.slice.
Feb 13 06:21:07.817314 systemd[1]: Started systemd-ask-password-console.path.
Feb 13 06:21:07.817320 systemd[1]: Started systemd-ask-password-wall.path.
Feb 13 06:21:07.817328 systemd[1]: Set up automount boot.automount.
Feb 13 06:21:07.817334 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 13 06:21:07.817340 systemd[1]: Stopped target initrd-switch-root.target.
Feb 13 06:21:07.817347 systemd[1]: Stopped target initrd-fs.target.
Feb 13 06:21:07.817354 systemd[1]: Stopped target initrd-root-fs.target.
Feb 13 06:21:07.817360 systemd[1]: Reached target integritysetup.target.
Feb 13 06:21:07.817367 systemd[1]: Reached target remote-cryptsetup.target.
Feb 13 06:21:07.817373 systemd[1]: Reached target remote-fs.target.
Feb 13 06:21:07.817379 systemd[1]: Reached target slices.target.
Feb 13 06:21:07.817408 systemd[1]: Reached target swap.target.
Feb 13 06:21:07.817414 systemd[1]: Reached target torcx.target.
Feb 13 06:21:07.817421 systemd[1]: Reached target veritysetup.target.
Feb 13 06:21:07.817427 systemd[1]: Listening on systemd-coredump.socket.
Feb 13 06:21:07.817451 systemd[1]: Listening on systemd-initctl.socket.
Feb 13 06:21:07.817458 systemd[1]: Listening on systemd-networkd.socket.
Feb 13 06:21:07.817464 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 13 06:21:07.817471 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 13 06:21:07.817478 systemd[1]: Listening on systemd-userdbd.socket.
Feb 13 06:21:07.817484 systemd[1]: Mounting dev-hugepages.mount...
Feb 13 06:21:07.817490 systemd[1]: Mounting dev-mqueue.mount...
Feb 13 06:21:07.817497 systemd[1]: Mounting media.mount...
Feb 13 06:21:07.817503 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 06:21:07.817509 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 13 06:21:07.817516 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 13 06:21:07.817522 systemd[1]: Mounting tmp.mount...
Feb 13 06:21:07.817528 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 13 06:21:07.817536 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 13 06:21:07.817542 systemd[1]: Starting kmod-static-nodes.service...
Feb 13 06:21:07.817549 systemd[1]: Starting modprobe@configfs.service...
Feb 13 06:21:07.817555 systemd[1]: Starting modprobe@dm_mod.service...
Feb 13 06:21:07.817561 systemd[1]: Starting modprobe@drm.service...
Feb 13 06:21:07.817568 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 13 06:21:07.817574 systemd[1]: Starting modprobe@fuse.service...
Feb 13 06:21:07.817580 kernel: fuse: init (API version 7.34)
Feb 13 06:21:07.817586 systemd[1]: Starting modprobe@loop.service...
Feb 13 06:21:07.817593 kernel: loop: module loaded
Feb 13 06:21:07.817600 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 06:21:07.817607 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 06:21:07.817613 systemd[1]: Stopped systemd-fsck-root.service.
Feb 13 06:21:07.817619 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 06:21:07.817625 kernel: kauditd_printk_skb: 50 callbacks suppressed
Feb 13 06:21:07.817632 kernel: audit: type=1131 audit(1707805267.459:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:07.817638 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 06:21:07.817645 kernel: audit: type=1131 audit(1707805267.546:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:07.817651 systemd[1]: Stopped systemd-journald.service.
Feb 13 06:21:07.817657 kernel: audit: type=1130 audit(1707805267.610:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:07.817663 kernel: audit: type=1131 audit(1707805267.610:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:07.817669 kernel: audit: type=1334 audit(1707805267.695:76): prog-id=13 op=LOAD
Feb 13 06:21:07.817675 kernel: audit: type=1334 audit(1707805267.714:77): prog-id=14 op=LOAD
Feb 13 06:21:07.817680 kernel: audit: type=1334 audit(1707805267.732:78): prog-id=15 op=LOAD
Feb 13 06:21:07.817687 kernel: audit: type=1334 audit(1707805267.750:79): prog-id=11 op=UNLOAD
Feb 13 06:21:07.817693 systemd[1]: Starting systemd-journald.service...
Feb 13 06:21:07.817699 kernel: audit: type=1334 audit(1707805267.750:80): prog-id=12 op=UNLOAD
Feb 13 06:21:07.817705 systemd[1]: Starting systemd-modules-load.service...
Feb 13 06:21:07.817711 kernel: audit: type=1305 audit(1707805267.812:81): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 13 06:21:07.817718 systemd-journald[947]: Journal started
Feb 13 06:21:07.817743 systemd-journald[947]: Runtime Journal (/run/log/journal/98fbde3e131747fcac44c91a442fc8c9) is 8.0M, max 640.1M, 632.1M free.
Feb 13 06:21:04.259000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 06:21:04.525000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 13 06:21:04.528000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 13 06:21:04.528000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 13 06:21:04.528000 audit: BPF prog-id=8 op=LOAD
Feb 13 06:21:04.528000 audit: BPF prog-id=8 op=UNLOAD
Feb 13 06:21:04.528000 audit: BPF prog-id=9 op=LOAD
Feb 13 06:21:04.528000 audit: BPF prog-id=9 op=UNLOAD
Feb 13 06:21:04.593000 audit[837]: AVC avc: denied { associate } for pid=837 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 13 06:21:04.593000 audit[837]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58dc a1=c00002ce58 a2=c00002bb00 a3=32 items=0 ppid=820 pid=837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 06:21:04.593000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 13 06:21:04.618000 audit[837]: AVC avc: denied { associate } for pid=837 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 13 06:21:04.618000 audit[837]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59b5 a2=1ed a3=0 items=2 ppid=820 pid=837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 06:21:04.618000 audit: CWD cwd="/"
Feb 13 06:21:04.618000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:04.618000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:04.618000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 13 06:21:06.171000 audit: BPF prog-id=10 op=LOAD
Feb 13 06:21:06.171000 audit: BPF prog-id=3 op=UNLOAD
Feb 13 06:21:06.171000 audit: BPF prog-id=11 op=LOAD
Feb 13 06:21:06.171000 audit: BPF prog-id=12 op=LOAD
Feb 13 06:21:06.171000 audit: BPF prog-id=4 op=UNLOAD
Feb 13 06:21:06.171000 audit: BPF prog-id=5 op=UNLOAD
Feb 13 06:21:06.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:06.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:06.223000 audit: BPF prog-id=10 op=UNLOAD
Feb 13 06:21:06.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:06.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:06.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:07.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:07.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:07.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:07.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:07.695000 audit: BPF prog-id=13 op=LOAD
Feb 13 06:21:07.714000 audit: BPF prog-id=14 op=LOAD
Feb 13 06:21:07.732000 audit: BPF prog-id=15 op=LOAD
Feb 13 06:21:07.750000 audit: BPF prog-id=11 op=UNLOAD
Feb 13 06:21:07.750000 audit: BPF prog-id=12 op=UNLOAD
Feb 13 06:21:07.812000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 13 06:21:04.591586 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 13 06:21:06.170863 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 06:21:04.591980 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 13 06:21:06.170870 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Feb 13 06:21:04.591998 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 13 06:21:06.173473 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 13 06:21:04.592021 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 13 06:21:06.173538 systemd[1]: Stopped iscsiuio.service.
Feb 13 06:21:04.592030 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 13 06:21:06.180973 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 06:21:04.592053 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 13 06:21:04.592063 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 13 06:21:04.592426 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 13 06:21:04.592460 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 13 06:21:04.592471 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 13 06:21:04.592956 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 13 06:21:04.592984 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 13 06:21:04.592999 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 13 06:21:04.593011 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 13 06:21:04.593024 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 13 06:21:04.593035 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:04Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 13 06:21:05.810990 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 06:21:05.811134 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 06:21:05.811190 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 06:21:05.811283 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 13 06:21:05.811313 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 13 06:21:05.811347 /usr/lib/systemd/system-generators/torcx-generator[837]: time="2024-02-13T06:21:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 13 06:21:07.812000 audit[947]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe07766310 a2=4000 a3=7ffe077663ac items=0 ppid=1 pid=947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 06:21:07.812000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 13 06:21:07.896568 systemd[1]: Starting systemd-network-generator.service...
Feb 13 06:21:07.923433 systemd[1]: Starting systemd-remount-fs.service...
Feb 13 06:21:07.950440 systemd[1]: Starting systemd-udev-trigger.service...
Feb 13 06:21:07.993226 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 06:21:07.993247 systemd[1]: Stopped verity-setup.service.
Feb 13 06:21:07.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.038430 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 06:21:08.058566 systemd[1]: Started systemd-journald.service.
Feb 13 06:21:08.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.067022 systemd[1]: Mounted dev-hugepages.mount.
Feb 13 06:21:08.074645 systemd[1]: Mounted dev-mqueue.mount.
Feb 13 06:21:08.081629 systemd[1]: Mounted media.mount.
Feb 13 06:21:08.088631 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 13 06:21:08.097626 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 13 06:21:08.106621 systemd[1]: Mounted tmp.mount.
Feb 13 06:21:08.113688 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 13 06:21:08.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.122722 systemd[1]: Finished kmod-static-nodes.service.
Feb 13 06:21:08.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.130752 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 06:21:08.130865 systemd[1]: Finished modprobe@configfs.service.
Feb 13 06:21:08.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.139825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 06:21:08.139958 systemd[1]: Finished modprobe@dm_mod.service.
Feb 13 06:21:08.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.148873 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 06:21:08.149032 systemd[1]: Finished modprobe@drm.service.
Feb 13 06:21:08.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.159068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 06:21:08.159410 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 13 06:21:08.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.169272 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 06:21:08.169613 systemd[1]: Finished modprobe@fuse.service.
Feb 13 06:21:08.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.178226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 06:21:08.178565 systemd[1]: Finished modprobe@loop.service.
Feb 13 06:21:08.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.187278 systemd[1]: Finished systemd-modules-load.service.
Feb 13 06:21:08.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.196203 systemd[1]: Finished systemd-network-generator.service.
Feb 13 06:21:08.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.205200 systemd[1]: Finished systemd-remount-fs.service.
Feb 13 06:21:08.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.214203 systemd[1]: Finished systemd-udev-trigger.service.
Feb 13 06:21:08.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.223873 systemd[1]: Reached target network-pre.target.
Feb 13 06:21:08.235454 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 13 06:21:08.244067 systemd[1]: Mounting sys-kernel-config.mount...
Feb 13 06:21:08.251588 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 06:21:08.252595 systemd[1]: Starting systemd-hwdb-update.service...
Feb 13 06:21:08.260053 systemd[1]: Starting systemd-journal-flush.service...
Feb 13 06:21:08.263935 systemd-journald[947]: Time spent on flushing to /var/log/journal/98fbde3e131747fcac44c91a442fc8c9 is 10.815ms for 1263 entries.
Feb 13 06:21:08.263935 systemd-journald[947]: System Journal (/var/log/journal/98fbde3e131747fcac44c91a442fc8c9) is 8.0M, max 195.6M, 187.6M free.
Feb 13 06:21:08.306248 systemd-journald[947]: Received client request to flush runtime journal.
Feb 13 06:21:08.276497 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 06:21:08.276960 systemd[1]: Starting systemd-random-seed.service...
Feb 13 06:21:08.292491 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 13 06:21:08.293192 systemd[1]: Starting systemd-sysctl.service...
Feb 13 06:21:08.299965 systemd[1]: Starting systemd-sysusers.service...
Feb 13 06:21:08.306961 systemd[1]: Starting systemd-udev-settle.service...
Feb 13 06:21:08.315545 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 13 06:21:08.324555 systemd[1]: Mounted sys-kernel-config.mount.
Feb 13 06:21:08.332601 systemd[1]: Finished systemd-journal-flush.service.
Feb 13 06:21:08.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.340626 systemd[1]: Finished systemd-random-seed.service.
Feb 13 06:21:08.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.348580 systemd[1]: Finished systemd-sysctl.service.
Feb 13 06:21:08.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.356615 systemd[1]: Finished systemd-sysusers.service.
Feb 13 06:21:08.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.365578 systemd[1]: Reached target first-boot-complete.target.
Feb 13 06:21:08.374138 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 13 06:21:08.383590 udevadm[963]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 06:21:08.394754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 13 06:21:08.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.573976 systemd[1]: Finished systemd-hwdb-update.service.
Feb 13 06:21:08.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.581000 audit: BPF prog-id=16 op=LOAD
Feb 13 06:21:08.581000 audit: BPF prog-id=17 op=LOAD
Feb 13 06:21:08.581000 audit: BPF prog-id=6 op=UNLOAD
Feb 13 06:21:08.581000 audit: BPF prog-id=7 op=UNLOAD
Feb 13 06:21:08.583728 systemd[1]: Starting systemd-udevd.service...
Feb 13 06:21:08.595542 systemd-udevd[966]: Using default interface naming scheme 'v252'.
Feb 13 06:21:08.615087 systemd[1]: Started systemd-udevd.service.
Feb 13 06:21:08.625450 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped.
Feb 13 06:21:08.626560 systemd[1]: Starting systemd-networkd.service...
Feb 13 06:21:08.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.624000 audit: BPF prog-id=18 op=LOAD
Feb 13 06:21:08.648000 audit: BPF prog-id=19 op=LOAD
Feb 13 06:21:08.667335 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Feb 13 06:21:08.667608 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 06:21:08.665000 audit: BPF prog-id=20 op=LOAD
Feb 13 06:21:08.666000 audit: BPF prog-id=21 op=LOAD
Feb 13 06:21:08.668204 systemd[1]: Starting systemd-userdbd.service...
Feb 13 06:21:08.689412 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 06:21:08.706679 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 13 06:21:08.708388 kernel: IPMI message handler: version 39.2
Feb 13 06:21:08.708421 kernel: ACPI: button: Power Button [PWRF]
Feb 13 06:21:08.745992 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 06:21:08.763475 kernel: ipmi device interface
Feb 13 06:21:08.774466 systemd[1]: Started systemd-userdbd.service.
Feb 13 06:21:08.666000 audit[998]: AVC avc: denied { confidentiality } for pid=998 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 13 06:21:08.666000 audit[998]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5644af50b840 a1=4d8bc a2=7fddebf89bc5 a3=5 items=42 ppid=966 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 06:21:08.666000 audit: CWD cwd="/"
Feb 13 06:21:08.666000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=1 name=(null) inode=26525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=2 name=(null) inode=26525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=3 name=(null) inode=26526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=4 name=(null) inode=26525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=5 name=(null) inode=26527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=6 name=(null) inode=26525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=7 name=(null) inode=26528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=8 name=(null) inode=26528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=9 name=(null) inode=26529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=10 name=(null) inode=26528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=11 name=(null) inode=26530 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=12 name=(null) inode=26528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=13 name=(null) inode=26531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=14 name=(null) inode=26528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=15 name=(null) inode=26532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=16 name=(null) inode=26528 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=17 name=(null) inode=26533 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=18 name=(null) inode=26525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=19 name=(null) inode=26534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=20 name=(null) inode=26534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=21 name=(null) inode=26535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=22 name=(null) inode=26534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=23 name=(null) inode=26536 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=24 name=(null) inode=26534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=25 name=(null) inode=26537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=26 name=(null) inode=26534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=27 name=(null) inode=26538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=28 name=(null) inode=26534 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=29 name=(null) inode=26539 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=30 name=(null) inode=26525 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=31 name=(null) inode=26540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=32 name=(null) inode=26540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=33 name=(null) inode=26541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=34 name=(null) inode=26540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=35 name=(null) inode=26542 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=36 name=(null) inode=26540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=37 name=(null) inode=26543 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=38 name=(null) inode=26540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=39 name=(null) inode=26544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=40 name=(null) inode=26540 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PATH item=41 name=(null) inode=26545 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 13 06:21:08.666000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 13 06:21:08.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:08.817443 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Feb 13 06:21:08.817651 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Feb 13 06:21:08.837671 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI)
Feb 13 06:21:08.840390 kernel: ipmi_si: IPMI System Interface driver
Feb 13 06:21:08.840416 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Feb 13 06:21:08.840509 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Feb 13 06:21:08.878023 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Feb 13 06:21:08.938229 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Feb 13 06:21:08.976316 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Feb 13 06:21:08.976345 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Feb 13 06:21:08.976471 kernel: iTCO_vendor_support: vendor-support=0
Feb 13 06:21:08.976490 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Feb 13 06:21:09.061147 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Feb 13 06:21:09.061352 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Feb 13 06:21:09.061393 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Feb 13 06:21:09.142238 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Feb 13 06:21:09.142345 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Feb 13 06:21:09.142433 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Feb 13 06:21:09.161389 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Feb 13 06:21:09.222290 systemd-networkd[1007]: bond0: netdev ready
Feb 13 06:21:09.225097 systemd-networkd[1007]: lo: Link UP
Feb 13 06:21:09.225100 systemd-networkd[1007]: lo: Gained carrier
Feb 13 06:21:09.225506 systemd-networkd[1007]: Enumeration completed
Feb 13 06:21:09.225597 systemd[1]: Started systemd-networkd.service.
Feb 13 06:21:09.225820 systemd-networkd[1007]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Feb 13 06:21:09.228038 systemd-networkd[1007]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:bd.network.
Feb 13 06:21:09.232725 kernel: intel_rapl_common: Found RAPL domain package
Feb 13 06:21:09.232783 kernel: intel_rapl_common: Found RAPL domain core
Feb 13 06:21:09.232802 kernel: intel_rapl_common: Found RAPL domain dram
Feb 13 06:21:09.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:09.286427 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Feb 13 06:21:09.304418 kernel: ipmi_ssif: IPMI SSIF Interface driver
Feb 13 06:21:09.309558 systemd[1]: Finished systemd-udev-settle.service.
Feb 13 06:21:09.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:09.319099 systemd[1]: Starting lvm2-activation-early.service...
Feb 13 06:21:09.334820 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 06:21:09.360819 systemd[1]: Finished lvm2-activation-early.service.
Feb 13 06:21:09.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:09.368504 systemd[1]: Reached target cryptsetup.target.
Feb 13 06:21:09.377048 systemd[1]: Starting lvm2-activation.service...
Feb 13 06:21:09.379159 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 06:21:09.408805 systemd[1]: Finished lvm2-activation.service.
Feb 13 06:21:09.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:09.416501 systemd[1]: Reached target local-fs-pre.target.
Feb 13 06:21:09.424463 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 06:21:09.424477 systemd[1]: Reached target local-fs.target.
Feb 13 06:21:09.432470 systemd[1]: Reached target machines.target.
Feb 13 06:21:09.441078 systemd[1]: Starting ldconfig.service...
Feb 13 06:21:09.447924 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 13 06:21:09.447945 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 13 06:21:09.448458 systemd[1]: Starting systemd-boot-update.service...
Feb 13 06:21:09.455887 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 13 06:21:09.465920 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 13 06:21:09.465998 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 13 06:21:09.466021 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 13 06:21:09.466567 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 13 06:21:09.466824 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1073 (bootctl)
Feb 13 06:21:09.467426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 13 06:21:09.472570 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 13 06:21:09.474351 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 06:21:09.475386 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 06:21:09.479529 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 06:21:09.479847 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 13 06:21:09.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:09.486907 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 13 06:21:09.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:09.541736 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31)
Feb 13 06:21:09.541736 systemd-fsck[1081]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 13 06:21:09.542520 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 13 06:21:09.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:09.553134 systemd[1]: Mounting boot.mount...
Feb 13 06:21:09.574088 systemd[1]: Mounted boot.mount.
Feb 13 06:21:09.591871 systemd[1]: Finished systemd-boot-update.service.
Feb 13 06:21:09.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:09.620011 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 13 06:21:09.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 06:21:09.629329 systemd[1]: Starting audit-rules.service...
Feb 13 06:21:09.636969 systemd[1]: Starting clean-ca-certificates.service...
Feb 13 06:21:09.645979 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 13 06:21:09.648000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 13 06:21:09.648000 audit[1104]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc881c5560 a2=420 a3=0 items=0 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 13 06:21:09.648000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 13 06:21:09.650457 augenrules[1104]: No rules
Feb 13 06:21:09.656512 systemd[1]: Starting systemd-resolved.service...
Feb 13 06:21:09.665258 systemd[1]: Starting systemd-timesyncd.service...
Feb 13 06:21:09.672948 systemd[1]: Starting systemd-update-utmp.service...
Feb 13 06:21:09.679685 systemd[1]: Finished audit-rules.service.
Feb 13 06:21:09.686554 systemd[1]: Finished clean-ca-certificates.service.
Feb 13 06:21:09.695540 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 13 06:21:09.707879 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 06:21:09.709183 systemd[1]: Finished systemd-update-utmp.service.
Feb 13 06:21:09.731712 ldconfig[1072]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 06:21:09.735154 systemd[1]: Finished ldconfig.service.
Feb 13 06:21:09.739388 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 13 06:21:09.753325 systemd[1]: Starting systemd-update-done.service...
Feb 13 06:21:09.764388 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Feb 13 06:21:09.764419 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 13 06:21:09.764482 systemd-networkd[1007]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:bc.network.
Feb 13 06:21:09.790899 systemd-resolved[1109]: Positive Trust Anchors:
Feb 13 06:21:09.790906 systemd-resolved[1109]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 06:21:09.790924 systemd-resolved[1109]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 13 06:21:09.791557 systemd[1]: Started systemd-timesyncd.service.
Feb 13 06:21:09.799592 systemd[1]: Finished systemd-update-done.service.
Feb 13 06:21:09.807506 systemd[1]: Reached target time-set.target.
Feb 13 06:21:09.809448 systemd-resolved[1109]: Using system hostname 'ci-3510.3.2-a-8b83115c31'.
Feb 13 06:21:09.914564 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 13 06:21:09.972427 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Feb 13 06:21:09.997395 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Feb 13 06:21:10.016428 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Feb 13 06:21:10.017156 systemd-networkd[1007]: bond0: Link UP
Feb 13 06:21:10.017536 systemd-networkd[1007]: enp1s0f1np1: Link UP
Feb 13 06:21:10.017835 systemd-networkd[1007]: enp1s0f0np0: Link UP
Feb 13 06:21:10.018079 systemd-networkd[1007]: enp1s0f1np1: Gained carrier
Feb 13 06:21:10.018410 systemd[1]: Started systemd-resolved.service.
Feb 13 06:21:10.019754 systemd-networkd[1007]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:84:bc.network.
Feb 13 06:21:10.026543 systemd[1]: Reached target network.target.
Feb 13 06:21:10.059465 systemd[1]: Reached target nss-lookup.target.
Feb 13 06:21:10.061489 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex
Feb 13 06:21:10.061517 kernel: bond0: active interface up!
Feb 13 06:21:10.078466 systemd[1]: Reached target sysinit.target.
Feb 13 06:21:10.084388 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex
Feb 13 06:21:10.092510 systemd[1]: Started motdgen.path.
Feb 13 06:21:10.099462 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 13 06:21:10.108487 systemd[1]: Started logrotate.timer.
Feb 13 06:21:10.115462 systemd[1]: Started mdadm.timer.
Feb 13 06:21:10.122421 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 13 06:21:10.139416 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 06:21:10.139453 systemd[1]: Reached target paths.target.
Feb 13 06:21:10.145425 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Feb 13 06:21:10.151490 systemd[1]: Reached target timers.target.
Feb 13 06:21:10.158543 systemd[1]: Listening on dbus.socket.
Feb 13 06:21:10.165977 systemd[1]: Starting docker.socket...
Feb 13 06:21:10.173773 systemd[1]: Listening on sshd.socket.
Feb 13 06:21:10.180466 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 13 06:21:10.180689 systemd[1]: Listening on docker.socket.
Feb 13 06:21:10.195071 systemd[1]: Reached target sockets.target.
Feb 13 06:21:10.207389 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms
Feb 13 06:21:10.223468 systemd[1]: Reached target basic.target.
Feb 13 06:21:10.229628 systemd-networkd[1007]: bond0: Gained carrier
Feb 13 06:21:10.229725 systemd-networkd[1007]: enp1s0f0np0: Gained carrier
Feb 13 06:21:10.229788 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection.
Feb 13 06:21:10.230428 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms
Feb 13 06:21:10.230470 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave
Feb 13 06:21:10.252614 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection.
Feb 13 06:21:10.252791 systemd-networkd[1007]: enp1s0f1np1: Link DOWN
Feb 13 06:21:10.252794 systemd-networkd[1007]: enp1s0f1np1: Lost carrier
Feb 13 06:21:10.253484 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 13 06:21:10.253498 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 13 06:21:10.253952 systemd[1]: Starting containerd.service...
Feb 13 06:21:10.260889 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 13 06:21:10.263531 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection.
Feb 13 06:21:10.263565 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection.
Feb 13 06:21:10.263679 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection.
Feb 13 06:21:10.269908 systemd[1]: Starting coreos-metadata.service...
Feb 13 06:21:10.276955 systemd[1]: Starting dbus.service...
Feb 13 06:21:10.282993 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 13 06:21:10.288646 jq[1125]: false
Feb 13 06:21:10.289944 systemd[1]: Starting extend-filesystems.service...
Feb 13 06:21:10.296240 dbus-daemon[1122]: [system] SELinux support is enabled
Feb 13 06:21:10.296455 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 13 06:21:10.297188 systemd[1]: Starting motdgen.service...
Feb 13 06:21:10.297948 extend-filesystems[1126]: Found sda
Feb 13 06:21:10.310466 extend-filesystems[1126]: Found sda1
Feb 13 06:21:10.310466 extend-filesystems[1126]: Found sda2
Feb 13 06:21:10.310466 extend-filesystems[1126]: Found sda3
Feb 13 06:21:10.310466 extend-filesystems[1126]: Found usr
Feb 13 06:21:10.310466 extend-filesystems[1126]: Found sda4
Feb 13 06:21:10.310466 extend-filesystems[1126]: Found sda6
Feb 13 06:21:10.310466 extend-filesystems[1126]: Found sda7
Feb 13 06:21:10.310466 extend-filesystems[1126]: Found sda9
Feb 13 06:21:10.310466 extend-filesystems[1126]: Checking size of /dev/sda9
Feb 13 06:21:10.310466 extend-filesystems[1126]: Resized partition /dev/sda9
Feb 13 06:21:10.542443 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Feb 13 06:21:10.542470 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Feb 13 06:21:10.542606 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms
Feb 13 06:21:10.542627 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1
Feb 13 06:21:10.542643 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms
Feb 13 06:21:10.542659 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms
Feb 13 06:21:10.542674 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex
Feb 13 06:21:10.304099 systemd[1]: Starting prepare-cni-plugins.service...
Feb 13 06:21:10.542782 coreos-metadata[1119]: Feb 13 06:21:10.319 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 06:21:10.542917 coreos-metadata[1118]: Feb 13 06:21:10.318 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 13 06:21:10.543038 extend-filesystems[1140]: resize2fs 1.46.5 (30-Dec-2021)
Feb 13 06:21:10.325853 systemd[1]: Starting prepare-critools.service...
Feb 13 06:21:10.331096 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 13 06:21:10.354949 systemd[1]: Starting sshd-keygen.service...
Feb 13 06:21:10.376355 systemd[1]: Starting systemd-logind.service...
Feb 13 06:21:10.392459 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 13 06:21:10.392989 systemd[1]: Starting tcsd.service...
Feb 13 06:21:10.557924 update_engine[1156]: I0213 06:21:10.454547 1156 main.cc:92] Flatcar Update Engine starting
Feb 13 06:21:10.557924 update_engine[1156]: I0213 06:21:10.457808 1156 update_check_scheduler.cc:74] Next update check in 6m21s
Feb 13 06:21:10.399610 systemd-logind[1154]: Watching system buttons on /dev/input/event3 (Power Button)
Feb 13 06:21:10.558216 jq[1157]: true
Feb 13 06:21:10.399619 systemd-logind[1154]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 13 06:21:10.399628 systemd-logind[1154]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Feb 13 06:21:10.399731 systemd-logind[1154]: New seat seat0.
Feb 13 06:21:10.404716 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 06:21:10.405086 systemd[1]: Starting update-engine.service...
Feb 13 06:21:10.423628 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 13 06:21:10.472697 systemd-networkd[1007]: enp1s0f1np1: Link UP
Feb 13 06:21:10.472877 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection.
Feb 13 06:21:10.472893 systemd-networkd[1007]: enp1s0f1np1: Gained carrier
Feb 13 06:21:10.472969 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection.
Feb 13 06:21:10.506491 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection.
Feb 13 06:21:10.526622 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection.
Feb 13 06:21:10.533996 systemd[1]: Started dbus.service.
Feb 13 06:21:10.551163 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 06:21:10.551252 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 13 06:21:10.551395 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 06:21:10.551473 systemd[1]: Finished motdgen.service.
Feb 13 06:21:10.565354 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 06:21:10.565439 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 13 06:21:10.571029 tar[1159]: ./
Feb 13 06:21:10.571029 tar[1159]: ./loopback
Feb 13 06:21:10.575993 jq[1163]: false
Feb 13 06:21:10.576113 tar[1160]: crictl
Feb 13 06:21:10.576693 dbus-daemon[1122]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 06:21:10.577185 systemd[1]: update-ssh-keys-after-ignition.service: Skipped due to 'exec-condition'.
Feb 13 06:21:10.577272 systemd[1]: Condition check resulted in update-ssh-keys-after-ignition.service being skipped.
Feb 13 06:21:10.581483 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Feb 13 06:21:10.581568 systemd[1]: Condition check resulted in tcsd.service being skipped.
Feb 13 06:21:10.581636 systemd[1]: Started update-engine.service.
Feb 13 06:21:10.584805 sshd_keygen[1153]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 06:21:10.586676 env[1164]: time="2024-02-13T06:21:10.586623377Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 06:21:10.594535 tar[1159]: ./bandwidth Feb 13 06:21:10.595508 env[1164]: time="2024-02-13T06:21:10.595465397Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 06:21:10.595541 env[1164]: time="2024-02-13T06:21:10.595528770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 06:21:10.596065 env[1164]: time="2024-02-13T06:21:10.596048032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 06:21:10.596065 env[1164]: time="2024-02-13T06:21:10.596064405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 06:21:10.596185 env[1164]: time="2024-02-13T06:21:10.596173306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 06:21:10.596185 env[1164]: time="2024-02-13T06:21:10.596184114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 06:21:10.596226 env[1164]: time="2024-02-13T06:21:10.596191195Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 06:21:10.596226 env[1164]: time="2024-02-13T06:21:10.596196631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 06:21:10.596275 env[1164]: time="2024-02-13T06:21:10.596236962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 06:21:10.596471 env[1164]: time="2024-02-13T06:21:10.596460667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 06:21:10.596537 env[1164]: time="2024-02-13T06:21:10.596526736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 06:21:10.596537 env[1164]: time="2024-02-13T06:21:10.596536491Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 06:21:10.596579 env[1164]: time="2024-02-13T06:21:10.596567587Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 06:21:10.596579 env[1164]: time="2024-02-13T06:21:10.596574707Z" level=info msg="metadata content store policy set" policy=shared Feb 13 06:21:10.597363 systemd[1]: Finished sshd-keygen.service. Feb 13 06:21:10.604516 systemd[1]: Started systemd-logind.service. Feb 13 06:21:10.614294 systemd[1]: Starting issuegen.service... Feb 13 06:21:10.614772 env[1164]: time="2024-02-13T06:21:10.614758131Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 06:21:10.614810 env[1164]: time="2024-02-13T06:21:10.614777315Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 06:21:10.614810 env[1164]: time="2024-02-13T06:21:10.614787207Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 06:21:10.614845 env[1164]: time="2024-02-13T06:21:10.614809311Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 06:21:10.614845 env[1164]: time="2024-02-13T06:21:10.614821181Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 06:21:10.614845 env[1164]: time="2024-02-13T06:21:10.614829341Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 06:21:10.614845 env[1164]: time="2024-02-13T06:21:10.614836814Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 06:21:10.614908 env[1164]: time="2024-02-13T06:21:10.614844992Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 06:21:10.614908 env[1164]: time="2024-02-13T06:21:10.614852681Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 13 06:21:10.614908 env[1164]: time="2024-02-13T06:21:10.614860238Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 06:21:10.614908 env[1164]: time="2024-02-13T06:21:10.614868486Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 06:21:10.614908 env[1164]: time="2024-02-13T06:21:10.614879939Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 06:21:10.614995 env[1164]: time="2024-02-13T06:21:10.614941904Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 06:21:10.614995 env[1164]: time="2024-02-13T06:21:10.614989076Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 06:21:10.615431 env[1164]: time="2024-02-13T06:21:10.615378146Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 06:21:10.615490 env[1164]: time="2024-02-13T06:21:10.615469276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615607 env[1164]: time="2024-02-13T06:21:10.615593125Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 06:21:10.615653 env[1164]: time="2024-02-13T06:21:10.615639186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615691 env[1164]: time="2024-02-13T06:21:10.615658822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615691 env[1164]: time="2024-02-13T06:21:10.615672428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615691 env[1164]: time="2024-02-13T06:21:10.615684743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615772 env[1164]: time="2024-02-13T06:21:10.615695828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615772 env[1164]: time="2024-02-13T06:21:10.615707603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 13 06:21:10.615772 env[1164]: time="2024-02-13T06:21:10.615719699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615772 env[1164]: time="2024-02-13T06:21:10.615730145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615772 env[1164]: time="2024-02-13T06:21:10.615746967Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 06:21:10.615906 env[1164]: time="2024-02-13T06:21:10.615843664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615906 env[1164]: time="2024-02-13T06:21:10.615857142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615906 env[1164]: time="2024-02-13T06:21:10.615869363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 06:21:10.615906 env[1164]: time="2024-02-13T06:21:10.615880673Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 06:21:10.615906 env[1164]: time="2024-02-13T06:21:10.615894219Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 06:21:10.616033 env[1164]: time="2024-02-13T06:21:10.615906023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 06:21:10.616033 env[1164]: time="2024-02-13T06:21:10.615924864Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 06:21:10.616033 env[1164]: time="2024-02-13T06:21:10.615953527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 06:21:10.616152 env[1164]: time="2024-02-13T06:21:10.616122853Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616159808Z" level=info msg="Connect containerd service" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616186097Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616483153Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616585566Z" level=info msg="Start subscribing containerd event" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616606439Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616632841Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616635658Z" level=info msg="Start recovering state" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616656762Z" level=info msg="containerd successfully booted in 0.030400s" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616675548Z" level=info msg="Start event monitor" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616696096Z" level=info msg="Start snapshots syncer" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616706246Z" level=info msg="Start cni network conf syncer for default" Feb 13 06:21:10.618264 env[1164]: time="2024-02-13T06:21:10.616711308Z" level=info msg="Start streaming server" Feb 13 06:21:10.623567 systemd[1]: Started locksmithd.service. 
Feb 13 06:21:10.631521 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 06:21:10.631613 systemd[1]: Reached target system-config.target. Feb 13 06:21:10.633584 tar[1159]: ./ptp Feb 13 06:21:10.639474 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 06:21:10.639545 systemd[1]: Reached target user-config.target. Feb 13 06:21:10.650114 systemd[1]: Started containerd.service. Feb 13 06:21:10.656660 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 06:21:10.656751 systemd[1]: Finished issuegen.service. Feb 13 06:21:10.661999 tar[1159]: ./vlan Feb 13 06:21:10.665577 systemd[1]: Starting systemd-user-sessions.service... Feb 13 06:21:10.674679 systemd[1]: Finished systemd-user-sessions.service. Feb 13 06:21:10.683292 systemd[1]: Started getty@tty1.service. Feb 13 06:21:10.690315 tar[1159]: ./host-device Feb 13 06:21:10.692256 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 06:21:10.700525 systemd[1]: Reached target getty.target. Feb 13 06:21:10.703205 locksmithd[1201]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 06:21:10.715516 tar[1159]: ./tuning Feb 13 06:21:10.740644 tar[1159]: ./vrf Feb 13 06:21:10.766879 tar[1159]: ./sbr Feb 13 06:21:10.789789 tar[1159]: ./tap Feb 13 06:21:10.816200 tar[1159]: ./dhcp Feb 13 06:21:10.887011 tar[1159]: ./static Feb 13 06:21:10.895640 systemd[1]: Finished prepare-critools.service. 
Feb 13 06:21:10.908241 tar[1159]: ./firewall Feb 13 06:21:10.937929 tar[1159]: ./macvlan Feb 13 06:21:10.964527 tar[1159]: ./dummy Feb 13 06:21:10.987422 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 06:21:11.012659 extend-filesystems[1140]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 06:21:11.012659 extend-filesystems[1140]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 06:21:11.012659 extend-filesystems[1140]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 06:21:11.050465 extend-filesystems[1126]: Resized filesystem in /dev/sda9 Feb 13 06:21:11.050465 extend-filesystems[1126]: Found sdb Feb 13 06:21:11.013298 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 06:21:11.067503 tar[1159]: ./bridge Feb 13 06:21:11.067503 tar[1159]: ./ipvlan Feb 13 06:21:11.013392 systemd[1]: Finished extend-filesystems.service. Feb 13 06:21:11.067698 tar[1159]: ./portmap Feb 13 06:21:11.091189 tar[1159]: ./host-local Feb 13 06:21:11.115466 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 06:21:11.562654 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 06:21:11.754530 systemd-networkd[1007]: bond0: Gained IPv6LL Feb 13 06:21:11.754740 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 06:21:12.768437 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 06:21:15.710717 login[1207]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 06:21:15.718251 systemd[1]: Created slice user-500.slice. Feb 13 06:21:15.718819 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 06:21:15.719905 systemd-logind[1154]: New session 1 of user core. Feb 13 06:21:15.720339 login[1206]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 06:21:15.722357 systemd-logind[1154]: New session 2 of user core. 
Feb 13 06:21:15.724091 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 06:21:15.724767 systemd[1]: Starting user@500.service... Feb 13 06:21:15.726495 (systemd)[1216]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:15.791053 systemd[1216]: Queued start job for default target default.target. Feb 13 06:21:15.791272 systemd[1216]: Reached target paths.target. Feb 13 06:21:15.791284 systemd[1216]: Reached target sockets.target. Feb 13 06:21:15.791292 systemd[1216]: Reached target timers.target. Feb 13 06:21:15.791299 systemd[1216]: Reached target basic.target. Feb 13 06:21:15.791317 systemd[1216]: Reached target default.target. Feb 13 06:21:15.791331 systemd[1216]: Startup finished in 61ms. Feb 13 06:21:15.791350 systemd[1]: Started user@500.service. Feb 13 06:21:15.791922 systemd[1]: Started session-1.scope. Feb 13 06:21:15.792252 systemd[1]: Started session-2.scope. Feb 13 06:21:16.491774 coreos-metadata[1119]: Feb 13 06:21:16.491 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 06:21:16.492546 coreos-metadata[1118]: Feb 13 06:21:16.491 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 06:21:17.492318 coreos-metadata[1119]: Feb 13 06:21:17.492 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 06:21:17.493087 coreos-metadata[1118]: Feb 13 06:21:17.492 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 06:21:18.080854 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 13 06:21:18.081022 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 13 06:21:18.569975 coreos-metadata[1119]: Feb 13 06:21:18.569 INFO Fetch successful Feb 13 06:21:18.571689 
coreos-metadata[1118]: Feb 13 06:21:18.571 INFO Fetch successful Feb 13 06:21:18.592360 systemd[1]: Finished coreos-metadata.service. Feb 13 06:21:18.593191 systemd[1]: Started packet-phone-home.service. Feb 13 06:21:18.597587 unknown[1118]: wrote ssh authorized keys file for user: core Feb 13 06:21:18.622673 curl[1238]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 06:21:18.623344 curl[1238]: Dload Upload Total Spent Left Speed Feb 13 06:21:18.660744 update-ssh-keys[1239]: Updated "/home/core/.ssh/authorized_keys" Feb 13 06:21:18.661788 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 06:21:18.663143 systemd[1]: Reached target multi-user.target. Feb 13 06:21:18.666379 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 06:21:18.674156 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 13 06:21:18.674230 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 06:21:18.674326 systemd[1]: Startup finished in 1.902s (kernel) + 6.096s (initrd) + 14.746s (userspace) = 22.745s. Feb 13 06:21:18.821671 curl[1238]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 06:21:18.824138 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 13 06:21:20.178100 systemd[1]: Created slice system-sshd.slice. Feb 13 06:21:20.178733 systemd[1]: Started sshd@0-147.75.49.59:22-139.178.68.195:49634.service. Feb 13 06:21:20.223213 sshd[1243]: Accepted publickey for core from 139.178.68.195 port 49634 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:20.226504 sshd[1243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:20.236567 systemd-logind[1154]: New session 3 of user core. Feb 13 06:21:20.240318 systemd[1]: Started session-3.scope. Feb 13 06:21:20.308722 systemd[1]: Started sshd@1-147.75.49.59:22-139.178.68.195:49642.service. 
Feb 13 06:21:20.341207 sshd[1248]: Accepted publickey for core from 139.178.68.195 port 49642 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:20.341879 sshd[1248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:20.344113 systemd-logind[1154]: New session 4 of user core. Feb 13 06:21:20.344571 systemd[1]: Started session-4.scope. Feb 13 06:21:20.396079 sshd[1248]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:20.397570 systemd[1]: sshd@1-147.75.49.59:22-139.178.68.195:49642.service: Deactivated successfully. Feb 13 06:21:20.397877 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 06:21:20.398203 systemd-logind[1154]: Session 4 logged out. Waiting for processes to exit. Feb 13 06:21:20.398710 systemd[1]: Started sshd@2-147.75.49.59:22-139.178.68.195:49656.service. Feb 13 06:21:20.399207 systemd-logind[1154]: Removed session 4. Feb 13 06:21:20.431877 sshd[1254]: Accepted publickey for core from 139.178.68.195 port 49656 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:20.433094 sshd[1254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:20.436842 systemd-logind[1154]: New session 5 of user core. Feb 13 06:21:20.437838 systemd[1]: Started session-5.scope. Feb 13 06:21:20.493626 sshd[1254]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:20.495156 systemd[1]: sshd@2-147.75.49.59:22-139.178.68.195:49656.service: Deactivated successfully. Feb 13 06:21:20.495445 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 06:21:20.495817 systemd-logind[1154]: Session 5 logged out. Waiting for processes to exit. Feb 13 06:21:20.496312 systemd[1]: Started sshd@3-147.75.49.59:22-139.178.68.195:49672.service. Feb 13 06:21:20.496823 systemd-logind[1154]: Removed session 5. 
Feb 13 06:21:20.530097 sshd[1260]: Accepted publickey for core from 139.178.68.195 port 49672 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:20.531130 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:20.535047 systemd-logind[1154]: New session 6 of user core. Feb 13 06:21:20.535861 systemd[1]: Started session-6.scope. Feb 13 06:21:20.603757 sshd[1260]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:20.610376 systemd[1]: sshd@3-147.75.49.59:22-139.178.68.195:49672.service: Deactivated successfully. Feb 13 06:21:20.610897 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 06:21:20.611190 systemd-logind[1154]: Session 6 logged out. Waiting for processes to exit. Feb 13 06:21:20.611733 systemd[1]: Started sshd@4-147.75.49.59:22-139.178.68.195:49684.service. Feb 13 06:21:20.612197 systemd-logind[1154]: Removed session 6. Feb 13 06:21:20.644856 sshd[1266]: Accepted publickey for core from 139.178.68.195 port 49684 ssh2: RSA SHA256:llQCsnGK+DGQD8plqhBaBLF6Morh7a75TNnEFmu+zwc Feb 13 06:21:20.645872 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 06:21:20.649420 systemd-logind[1154]: New session 7 of user core. Feb 13 06:21:20.650270 systemd[1]: Started session-7.scope. Feb 13 06:21:20.716347 sudo[1269]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 06:21:20.716469 sudo[1269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 06:21:24.705419 systemd[1]: Reloading. 
Feb 13 06:21:24.736314 /usr/lib/systemd/system-generators/torcx-generator[1299]: time="2024-02-13T06:21:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 06:21:24.736333 /usr/lib/systemd/system-generators/torcx-generator[1299]: time="2024-02-13T06:21:24Z" level=info msg="torcx already run" Feb 13 06:21:24.797198 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 06:21:24.797209 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 06:21:24.813302 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 06:21:24.865915 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 06:21:24.873229 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 06:21:24.873487 systemd[1]: Reached target network-online.target. Feb 13 06:21:24.874150 systemd[1]: Started kubelet.service. Feb 13 06:21:25.490006 kubelet[1356]: E0213 06:21:25.489851 1356 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 06:21:25.495252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 06:21:25.495611 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 06:21:25.832320 systemd[1]: Stopped kubelet.service. Feb 13 06:21:25.873334 systemd[1]: Reloading. Feb 13 06:21:25.923834 /usr/lib/systemd/system-generators/torcx-generator[1458]: time="2024-02-13T06:21:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 06:21:25.923851 /usr/lib/systemd/system-generators/torcx-generator[1458]: time="2024-02-13T06:21:25Z" level=info msg="torcx already run" Feb 13 06:21:25.978190 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 06:21:25.978201 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 06:21:25.993503 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 06:21:26.048462 systemd[1]: Started kubelet.service. Feb 13 06:21:26.076004 kubelet[1514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 06:21:26.076004 kubelet[1514]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 06:21:26.076004 kubelet[1514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 06:21:26.076004 kubelet[1514]: I0213 06:21:26.075987 1514 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 06:21:26.485946 kubelet[1514]: I0213 06:21:26.485911 1514 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 13 06:21:26.485946 kubelet[1514]: I0213 06:21:26.485939 1514 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 06:21:26.486099 kubelet[1514]: I0213 06:21:26.486068 1514 server.go:895] "Client rotation is on, will bootstrap in background" Feb 13 06:21:26.497903 kubelet[1514]: I0213 06:21:26.497846 1514 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 06:21:26.533173 kubelet[1514]: I0213 06:21:26.533132 1514 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 06:21:26.534053 kubelet[1514]: I0213 06:21:26.534016 1514 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 06:21:26.534146 kubelet[1514]: I0213 06:21:26.534115 1514 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 06:21:26.534146 kubelet[1514]: I0213 06:21:26.534128 1514 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 06:21:26.534146 kubelet[1514]: I0213 06:21:26.534133 1514 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 06:21:26.534568 kubelet[1514]: I0213 
06:21:26.534533 1514 state_mem.go:36] "Initialized new in-memory state store" Feb 13 06:21:26.536994 kubelet[1514]: I0213 06:21:26.536959 1514 kubelet.go:393] "Attempting to sync node with API server" Feb 13 06:21:26.536994 kubelet[1514]: I0213 06:21:26.536968 1514 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 06:21:26.537423 kubelet[1514]: I0213 06:21:26.537401 1514 kubelet.go:309] "Adding apiserver pod source" Feb 13 06:21:26.537423 kubelet[1514]: I0213 06:21:26.537411 1514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 06:21:26.537525 kubelet[1514]: E0213 06:21:26.537485 1514 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:26.538095 kubelet[1514]: E0213 06:21:26.538058 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:26.540044 kubelet[1514]: I0213 06:21:26.540032 1514 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 06:21:26.541770 kubelet[1514]: W0213 06:21:26.541762 1514 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 06:21:26.542465 kubelet[1514]: I0213 06:21:26.542456 1514 server.go:1232] "Started kubelet" Feb 13 06:21:26.542586 kubelet[1514]: I0213 06:21:26.542537 1514 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 13 06:21:26.542633 kubelet[1514]: I0213 06:21:26.542609 1514 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 06:21:26.542687 kubelet[1514]: I0213 06:21:26.542678 1514 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 06:21:26.542733 kubelet[1514]: E0213 06:21:26.542720 1514 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 06:21:26.542759 kubelet[1514]: E0213 06:21:26.542739 1514 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 06:21:26.552287 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 13 06:21:26.552380 kubelet[1514]: I0213 06:21:26.552355 1514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 06:21:26.552539 kubelet[1514]: I0213 06:21:26.552525 1514 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 06:21:26.552579 kubelet[1514]: I0213 06:21:26.552561 1514 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 06:21:26.552605 kubelet[1514]: I0213 06:21:26.552592 1514 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 06:21:26.553859 kubelet[1514]: I0213 06:21:26.553848 1514 server.go:462] "Adding debug handlers to kubelet server" Feb 13 06:21:26.560180 kubelet[1514]: E0213 06:21:26.560136 1514 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.15\" not found" node="10.67.80.15" Feb 13 06:21:26.568351 kubelet[1514]: I0213 06:21:26.568340 1514 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 06:21:26.568351 kubelet[1514]: I0213 06:21:26.568349 1514 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 06:21:26.568426 kubelet[1514]: I0213 06:21:26.568363 1514 state_mem.go:36] "Initialized new in-memory state store" Feb 13 06:21:26.569226 kubelet[1514]: I0213 06:21:26.569220 1514 policy_none.go:49] "None policy: Start" Feb 13 06:21:26.569423 kubelet[1514]: I0213 06:21:26.569416 1514 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 06:21:26.569456 kubelet[1514]: I0213 06:21:26.569430 1514 state_mem.go:35] "Initializing new in-memory state store" Feb 13 06:21:26.573427 systemd[1]: Created slice kubepods.slice. Feb 13 06:21:26.575563 systemd[1]: Created slice kubepods-burstable.slice. Feb 13 06:21:26.576878 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 13 06:21:26.594273 kubelet[1514]: I0213 06:21:26.594235 1514 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 06:21:26.594355 kubelet[1514]: I0213 06:21:26.594349 1514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 06:21:26.594667 kubelet[1514]: E0213 06:21:26.594598 1514 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.15\" not found" Feb 13 06:21:26.628100 kubelet[1514]: I0213 06:21:26.628084 1514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 06:21:26.628649 kubelet[1514]: I0213 06:21:26.628610 1514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 06:21:26.628649 kubelet[1514]: I0213 06:21:26.628628 1514 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 06:21:26.628649 kubelet[1514]: I0213 06:21:26.628640 1514 kubelet.go:2303] "Starting kubelet main sync loop" Feb 13 06:21:26.628723 kubelet[1514]: E0213 06:21:26.628668 1514 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 06:21:26.653170 kubelet[1514]: I0213 06:21:26.653154 1514 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.15" Feb 13 06:21:26.659443 kubelet[1514]: I0213 06:21:26.659426 1514 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.15" Feb 13 06:21:26.670409 kubelet[1514]: I0213 06:21:26.670363 1514 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 06:21:26.670653 env[1164]: time="2024-02-13T06:21:26.670595748Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 06:21:26.670899 kubelet[1514]: I0213 06:21:26.670737 1514 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 06:21:27.488743 kubelet[1514]: I0213 06:21:27.488622 1514 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 06:21:27.489595 kubelet[1514]: W0213 06:21:27.489042 1514 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Feb 13 06:21:27.489595 kubelet[1514]: W0213 06:21:27.489061 1514 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Feb 13 06:21:27.489595 kubelet[1514]: W0213 06:21:27.489093 1514 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Feb 13 06:21:27.538541 kubelet[1514]: I0213 06:21:27.538440 1514 apiserver.go:52] "Watching apiserver" Feb 13 06:21:27.538793 kubelet[1514]: E0213 06:21:27.538443 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:27.543318 kubelet[1514]: I0213 06:21:27.543234 1514 topology_manager.go:215] "Topology Admit Handler" podUID="3272e48c-04c1-4732-b339-06eeda0fbf9d" podNamespace="kube-system" podName="cilium-9n8dn" Feb 13 06:21:27.543723 kubelet[1514]: I0213 06:21:27.543647 1514 topology_manager.go:215] "Topology Admit Handler" podUID="d0859d87-e004-488b-b330-045733d7092a" podNamespace="kube-system" podName="kube-proxy-wtlqs" Feb 13 
06:21:27.553764 kubelet[1514]: I0213 06:21:27.553711 1514 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 06:21:27.556960 kubelet[1514]: I0213 06:21:27.556932 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-run\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557008 kubelet[1514]: I0213 06:21:27.556973 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3272e48c-04c1-4732-b339-06eeda0fbf9d-clustermesh-secrets\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557008 kubelet[1514]: I0213 06:21:27.556987 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-host-proc-sys-kernel\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557008 kubelet[1514]: I0213 06:21:27.556998 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-cgroup\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557008 kubelet[1514]: I0213 06:21:27.557008 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-lib-modules\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " 
pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557006 systemd[1]: Created slice kubepods-besteffort-podd0859d87_e004_488b_b330_045733d7092a.slice. Feb 13 06:21:27.557365 kubelet[1514]: I0213 06:21:27.557018 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3272e48c-04c1-4732-b339-06eeda0fbf9d-hubble-tls\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557365 kubelet[1514]: I0213 06:21:27.557028 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0859d87-e004-488b-b330-045733d7092a-lib-modules\") pod \"kube-proxy-wtlqs\" (UID: \"d0859d87-e004-488b-b330-045733d7092a\") " pod="kube-system/kube-proxy-wtlqs" Feb 13 06:21:27.557365 kubelet[1514]: I0213 06:21:27.557038 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-xtables-lock\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557365 kubelet[1514]: I0213 06:21:27.557051 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-config-path\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557365 kubelet[1514]: I0213 06:21:27.557315 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbmdw\" (UniqueName: \"kubernetes.io/projected/3272e48c-04c1-4732-b339-06eeda0fbf9d-kube-api-access-qbmdw\") pod \"cilium-9n8dn\" (UID: 
\"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557518 kubelet[1514]: I0213 06:21:27.557340 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbw67\" (UniqueName: \"kubernetes.io/projected/d0859d87-e004-488b-b330-045733d7092a-kube-api-access-mbw67\") pod \"kube-proxy-wtlqs\" (UID: \"d0859d87-e004-488b-b330-045733d7092a\") " pod="kube-system/kube-proxy-wtlqs" Feb 13 06:21:27.557518 kubelet[1514]: I0213 06:21:27.557360 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-host-proc-sys-net\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557518 kubelet[1514]: I0213 06:21:27.557379 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0859d87-e004-488b-b330-045733d7092a-kube-proxy\") pod \"kube-proxy-wtlqs\" (UID: \"d0859d87-e004-488b-b330-045733d7092a\") " pod="kube-system/kube-proxy-wtlqs" Feb 13 06:21:27.557518 kubelet[1514]: I0213 06:21:27.557406 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0859d87-e004-488b-b330-045733d7092a-xtables-lock\") pod \"kube-proxy-wtlqs\" (UID: \"d0859d87-e004-488b-b330-045733d7092a\") " pod="kube-system/kube-proxy-wtlqs" Feb 13 06:21:27.557518 kubelet[1514]: I0213 06:21:27.557426 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-bpf-maps\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557518 
kubelet[1514]: I0213 06:21:27.557444 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-hostproc\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557618 kubelet[1514]: I0213 06:21:27.557462 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cni-path\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.557618 kubelet[1514]: I0213 06:21:27.557491 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-etc-cni-netd\") pod \"cilium-9n8dn\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " pod="kube-system/cilium-9n8dn" Feb 13 06:21:27.578192 sudo[1269]: pam_unix(sudo:session): session closed for user root Feb 13 06:21:27.580280 sshd[1266]: pam_unix(sshd:session): session closed for user core Feb 13 06:21:27.582309 systemd[1]: sshd@4-147.75.49.59:22-139.178.68.195:49684.service: Deactivated successfully. Feb 13 06:21:27.582848 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 06:21:27.583335 systemd-logind[1154]: Session 7 logged out. Waiting for processes to exit. Feb 13 06:21:27.584726 systemd[1]: Created slice kubepods-burstable-pod3272e48c_04c1_4732_b339_06eeda0fbf9d.slice. Feb 13 06:21:27.585052 systemd-logind[1154]: Removed session 7. 
Feb 13 06:21:27.885649 env[1164]: time="2024-02-13T06:21:27.885414052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wtlqs,Uid:d0859d87-e004-488b-b330-045733d7092a,Namespace:kube-system,Attempt:0,}" Feb 13 06:21:27.902422 env[1164]: time="2024-02-13T06:21:27.902296454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9n8dn,Uid:3272e48c-04c1-4732-b339-06eeda0fbf9d,Namespace:kube-system,Attempt:0,}" Feb 13 06:21:28.539502 kubelet[1514]: E0213 06:21:28.539371 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:28.586144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676114175.mount: Deactivated successfully. Feb 13 06:21:28.587978 env[1164]: time="2024-02-13T06:21:28.587930395Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:28.589215 env[1164]: time="2024-02-13T06:21:28.589175434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:28.589932 env[1164]: time="2024-02-13T06:21:28.589888905Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:28.590666 env[1164]: time="2024-02-13T06:21:28.590628054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:28.591016 env[1164]: time="2024-02-13T06:21:28.590977109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 13 06:21:28.592119 env[1164]: time="2024-02-13T06:21:28.592073582Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:28.593272 env[1164]: time="2024-02-13T06:21:28.593233124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:28.594128 env[1164]: time="2024-02-13T06:21:28.594081432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:28.600460 env[1164]: time="2024-02-13T06:21:28.600395374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 06:21:28.600460 env[1164]: time="2024-02-13T06:21:28.600433097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 06:21:28.600553 env[1164]: time="2024-02-13T06:21:28.600457560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 06:21:28.600553 env[1164]: time="2024-02-13T06:21:28.600530326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515 pid=1583 runtime=io.containerd.runc.v2 Feb 13 06:21:28.601332 env[1164]: time="2024-02-13T06:21:28.601309859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 06:21:28.601356 env[1164]: time="2024-02-13T06:21:28.601328055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 06:21:28.601356 env[1164]: time="2024-02-13T06:21:28.601334741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 06:21:28.601408 env[1164]: time="2024-02-13T06:21:28.601393329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c085e7f10a1bc03a21ddc2c5ef95e87d3a8080a2ffcc5a92d98bf37910110b6 pid=1590 runtime=io.containerd.runc.v2 Feb 13 06:21:28.607961 systemd[1]: Started cri-containerd-04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515.scope. Feb 13 06:21:28.609119 systemd[1]: Started cri-containerd-9c085e7f10a1bc03a21ddc2c5ef95e87d3a8080a2ffcc5a92d98bf37910110b6.scope. Feb 13 06:21:28.620355 env[1164]: time="2024-02-13T06:21:28.620310600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9n8dn,Uid:3272e48c-04c1-4732-b339-06eeda0fbf9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\"" Feb 13 06:21:28.620587 env[1164]: time="2024-02-13T06:21:28.620541581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wtlqs,Uid:d0859d87-e004-488b-b330-045733d7092a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c085e7f10a1bc03a21ddc2c5ef95e87d3a8080a2ffcc5a92d98bf37910110b6\"" Feb 13 06:21:28.621544 env[1164]: time="2024-02-13T06:21:28.621504931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 13 06:21:29.510632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549486429.mount: Deactivated successfully. 
Feb 13 06:21:29.540042 kubelet[1514]: E0213 06:21:29.540002 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:29.874379 env[1164]: time="2024-02-13T06:21:29.874238031Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:29.875578 env[1164]: time="2024-02-13T06:21:29.875536226Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:29.878612 env[1164]: time="2024-02-13T06:21:29.878308442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:29.880360 env[1164]: time="2024-02-13T06:21:29.880295122Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:29.881297 env[1164]: time="2024-02-13T06:21:29.881224773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 13 06:21:29.882060 env[1164]: time="2024-02-13T06:21:29.882023710Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 06:21:29.883731 env[1164]: time="2024-02-13T06:21:29.883675169Z" level=info msg="CreateContainer within sandbox \"9c085e7f10a1bc03a21ddc2c5ef95e87d3a8080a2ffcc5a92d98bf37910110b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 06:21:29.894321 env[1164]: 
time="2024-02-13T06:21:29.894238760Z" level=info msg="CreateContainer within sandbox \"9c085e7f10a1bc03a21ddc2c5ef95e87d3a8080a2ffcc5a92d98bf37910110b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09439e59b670a88a0f8f2ccba105c3c1fb6f3b686813db6b34729dfcea95369d\"" Feb 13 06:21:29.894987 env[1164]: time="2024-02-13T06:21:29.894911931Z" level=info msg="StartContainer for \"09439e59b670a88a0f8f2ccba105c3c1fb6f3b686813db6b34729dfcea95369d\"" Feb 13 06:21:29.916476 systemd[1]: Started cri-containerd-09439e59b670a88a0f8f2ccba105c3c1fb6f3b686813db6b34729dfcea95369d.scope. Feb 13 06:21:29.929923 env[1164]: time="2024-02-13T06:21:29.929899489Z" level=info msg="StartContainer for \"09439e59b670a88a0f8f2ccba105c3c1fb6f3b686813db6b34729dfcea95369d\" returns successfully" Feb 13 06:21:30.540875 kubelet[1514]: E0213 06:21:30.540762 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:30.657626 kubelet[1514]: I0213 06:21:30.657535 1514 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wtlqs" podStartSLOduration=3.396995469 podCreationTimestamp="2024-02-13 06:21:26 +0000 UTC" firstStartedPulling="2024-02-13 06:21:28.621278513 +0000 UTC m=+2.571106904" lastFinishedPulling="2024-02-13 06:21:29.881733546 +0000 UTC m=+3.831561961" observedRunningTime="2024-02-13 06:21:30.656899996 +0000 UTC m=+4.606728450" watchObservedRunningTime="2024-02-13 06:21:30.657450526 +0000 UTC m=+4.607278974" Feb 13 06:21:31.541408 kubelet[1514]: E0213 06:21:31.541345 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:32.541911 kubelet[1514]: E0213 06:21:32.541837 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:33.227249 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1593295881.mount: Deactivated successfully. Feb 13 06:21:33.542219 kubelet[1514]: E0213 06:21:33.542179 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:34.542988 kubelet[1514]: E0213 06:21:34.542942 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:34.891133 env[1164]: time="2024-02-13T06:21:34.891045257Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:34.891681 env[1164]: time="2024-02-13T06:21:34.891642201Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:34.892995 env[1164]: time="2024-02-13T06:21:34.892954155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 06:21:34.893221 env[1164]: time="2024-02-13T06:21:34.893174941Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 06:21:34.894091 env[1164]: time="2024-02-13T06:21:34.894050060Z" level=info msg="CreateContainer within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 06:21:34.897969 env[1164]: time="2024-02-13T06:21:34.897923625Z" level=info msg="CreateContainer 
within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\"" Feb 13 06:21:34.898158 env[1164]: time="2024-02-13T06:21:34.898117064Z" level=info msg="StartContainer for \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\"" Feb 13 06:21:34.905943 systemd[1]: Started cri-containerd-d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf.scope. Feb 13 06:21:34.922678 systemd[1]: cri-containerd-d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf.scope: Deactivated successfully. Feb 13 06:21:34.933979 env[1164]: time="2024-02-13T06:21:34.933881397Z" level=info msg="StartContainer for \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\" returns successfully" Feb 13 06:21:35.544188 kubelet[1514]: E0213 06:21:35.544103 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:35.899823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf-rootfs.mount: Deactivated successfully. 
Feb 13 06:21:36.295765 env[1164]: time="2024-02-13T06:21:36.295652439Z" level=info msg="shim disconnected" id=d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf Feb 13 06:21:36.296633 env[1164]: time="2024-02-13T06:21:36.295767342Z" level=warning msg="cleaning up after shim disconnected" id=d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf namespace=k8s.io Feb 13 06:21:36.296633 env[1164]: time="2024-02-13T06:21:36.295797882Z" level=info msg="cleaning up dead shim" Feb 13 06:21:36.307689 env[1164]: time="2024-02-13T06:21:36.307646774Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:21:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1871 runtime=io.containerd.runc.v2\n" Feb 13 06:21:36.545354 kubelet[1514]: E0213 06:21:36.545250 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:36.649003 env[1164]: time="2024-02-13T06:21:36.648915389Z" level=info msg="CreateContainer within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 06:21:36.654307 env[1164]: time="2024-02-13T06:21:36.654264726Z" level=info msg="CreateContainer within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\"" Feb 13 06:21:36.654546 env[1164]: time="2024-02-13T06:21:36.654508063Z" level=info msg="StartContainer for \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\"" Feb 13 06:21:36.663568 systemd[1]: Started cri-containerd-c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244.scope. 
Feb 13 06:21:36.675655 env[1164]: time="2024-02-13T06:21:36.675632506Z" level=info msg="StartContainer for \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\" returns successfully" Feb 13 06:21:36.682611 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 06:21:36.682840 systemd[1]: Stopped systemd-sysctl.service. Feb 13 06:21:36.682997 systemd[1]: Stopping systemd-sysctl.service... Feb 13 06:21:36.683946 systemd[1]: Starting systemd-sysctl.service... Feb 13 06:21:36.684209 systemd[1]: cri-containerd-c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244.scope: Deactivated successfully. Feb 13 06:21:36.688324 systemd[1]: Finished systemd-sysctl.service. Feb 13 06:21:36.710414 env[1164]: time="2024-02-13T06:21:36.710300418Z" level=info msg="shim disconnected" id=c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244 Feb 13 06:21:36.710760 env[1164]: time="2024-02-13T06:21:36.710427999Z" level=warning msg="cleaning up after shim disconnected" id=c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244 namespace=k8s.io Feb 13 06:21:36.710760 env[1164]: time="2024-02-13T06:21:36.710463501Z" level=info msg="cleaning up dead shim" Feb 13 06:21:36.725319 env[1164]: time="2024-02-13T06:21:36.725241731Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:21:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1934 runtime=io.containerd.runc.v2\n" Feb 13 06:21:36.900395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244-rootfs.mount: Deactivated successfully. 
Feb 13 06:21:37.546297 kubelet[1514]: E0213 06:21:37.546195 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:37.658070 env[1164]: time="2024-02-13T06:21:37.657976980Z" level=info msg="CreateContainer within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 06:21:37.675736 env[1164]: time="2024-02-13T06:21:37.675690137Z" level=info msg="CreateContainer within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\"" Feb 13 06:21:37.676105 env[1164]: time="2024-02-13T06:21:37.676045402Z" level=info msg="StartContainer for \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\"" Feb 13 06:21:37.685192 systemd[1]: Started cri-containerd-923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b.scope. Feb 13 06:21:37.697391 env[1164]: time="2024-02-13T06:21:37.697359534Z" level=info msg="StartContainer for \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\" returns successfully" Feb 13 06:21:37.698838 systemd[1]: cri-containerd-923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b.scope: Deactivated successfully. 
Feb 13 06:21:37.708832 env[1164]: time="2024-02-13T06:21:37.708801493Z" level=info msg="shim disconnected" id=923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b Feb 13 06:21:37.708934 env[1164]: time="2024-02-13T06:21:37.708833902Z" level=warning msg="cleaning up after shim disconnected" id=923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b namespace=k8s.io Feb 13 06:21:37.708934 env[1164]: time="2024-02-13T06:21:37.708842250Z" level=info msg="cleaning up dead shim" Feb 13 06:21:37.712173 env[1164]: time="2024-02-13T06:21:37.712133631Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:21:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1991 runtime=io.containerd.runc.v2\n" Feb 13 06:21:37.900775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b-rootfs.mount: Deactivated successfully. Feb 13 06:21:38.547181 kubelet[1514]: E0213 06:21:38.547071 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:38.666029 env[1164]: time="2024-02-13T06:21:38.665892787Z" level=info msg="CreateContainer within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 06:21:38.680605 env[1164]: time="2024-02-13T06:21:38.680527892Z" level=info msg="CreateContainer within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\"" Feb 13 06:21:38.680890 env[1164]: time="2024-02-13T06:21:38.680821082Z" level=info msg="StartContainer for \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\"" Feb 13 06:21:38.689484 systemd[1]: Started 
cri-containerd-c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59.scope. Feb 13 06:21:38.701546 env[1164]: time="2024-02-13T06:21:38.701517844Z" level=info msg="StartContainer for \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\" returns successfully" Feb 13 06:21:38.701729 systemd[1]: cri-containerd-c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59.scope: Deactivated successfully. Feb 13 06:21:38.710406 env[1164]: time="2024-02-13T06:21:38.710374796Z" level=info msg="shim disconnected" id=c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59 Feb 13 06:21:38.710506 env[1164]: time="2024-02-13T06:21:38.710406029Z" level=warning msg="cleaning up after shim disconnected" id=c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59 namespace=k8s.io Feb 13 06:21:38.710506 env[1164]: time="2024-02-13T06:21:38.710414010Z" level=info msg="cleaning up dead shim" Feb 13 06:21:38.713991 env[1164]: time="2024-02-13T06:21:38.713973227Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:21:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2046 runtime=io.containerd.runc.v2\n" Feb 13 06:21:38.901379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59-rootfs.mount: Deactivated successfully. 
Feb 13 06:21:39.548136 kubelet[1514]: E0213 06:21:39.548016 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:39.675398 env[1164]: time="2024-02-13T06:21:39.675246944Z" level=info msg="CreateContainer within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 06:21:39.694017 env[1164]: time="2024-02-13T06:21:39.693972006Z" level=info msg="CreateContainer within sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\"" Feb 13 06:21:39.694280 env[1164]: time="2024-02-13T06:21:39.694243976Z" level=info msg="StartContainer for \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\"" Feb 13 06:21:39.703168 systemd[1]: Started cri-containerd-98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb.scope. Feb 13 06:21:39.716508 env[1164]: time="2024-02-13T06:21:39.716479476Z" level=info msg="StartContainer for \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\" returns successfully" Feb 13 06:21:39.773469 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 06:21:39.837453 kubelet[1514]: I0213 06:21:39.837374 1514 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 13 06:21:39.925423 kernel: Initializing XFRM netlink socket Feb 13 06:21:39.939454 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 13 06:21:40.549177 kubelet[1514]: E0213 06:21:40.549068 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:40.717082 kubelet[1514]: I0213 06:21:40.716013 1514 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9n8dn" podStartSLOduration=8.443829154 podCreationTimestamp="2024-02-13 06:21:26 +0000 UTC" firstStartedPulling="2024-02-13 06:21:28.621328532 +0000 UTC m=+2.571156918" lastFinishedPulling="2024-02-13 06:21:34.893356992 +0000 UTC m=+8.843185379" observedRunningTime="2024-02-13 06:21:40.715660085 +0000 UTC m=+14.665488568" watchObservedRunningTime="2024-02-13 06:21:40.715857615 +0000 UTC m=+14.665686093" Feb 13 06:21:41.123303 systemd-timesyncd[1110]: Network configuration changed, trying to establish connection. Feb 13 06:21:41.123502 systemd-networkd[1007]: cilium_host: Link UP Feb 13 06:21:41.123616 systemd-networkd[1007]: cilium_net: Link UP Feb 13 06:21:41.123619 systemd-networkd[1007]: cilium_net: Gained carrier Feb 13 06:21:41.123749 systemd-networkd[1007]: cilium_host: Gained carrier Feb 13 06:21:41.131419 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 13 06:21:41.131501 systemd-networkd[1007]: cilium_host: Gained IPv6LL Feb 13 06:21:41.173360 systemd-networkd[1007]: cilium_vxlan: Link UP Feb 13 06:21:41.173364 systemd-networkd[1007]: cilium_vxlan: Gained carrier Feb 13 06:21:41.181480 systemd-timesyncd[1110]: Contacted time server [2607:ff50:0:20::5ca1:ab1e]:123 (2.flatcar.pool.ntp.org). Feb 13 06:21:41.181526 systemd-timesyncd[1110]: Initial clock synchronization to Tue 2024-02-13 06:21:40.892001 UTC. 
Feb 13 06:21:41.303405 kernel: NET: Registered PF_ALG protocol family Feb 13 06:21:41.549251 kubelet[1514]: E0213 06:21:41.549197 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:41.814844 systemd-networkd[1007]: lxc_health: Link UP Feb 13 06:21:41.837163 systemd-networkd[1007]: lxc_health: Gained carrier Feb 13 06:21:41.837393 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 06:21:41.962540 systemd-networkd[1007]: cilium_net: Gained IPv6LL Feb 13 06:21:42.549825 kubelet[1514]: E0213 06:21:42.549712 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:42.922502 systemd-networkd[1007]: lxc_health: Gained IPv6LL Feb 13 06:21:43.051477 systemd-networkd[1007]: cilium_vxlan: Gained IPv6LL Feb 13 06:21:43.522432 kubelet[1514]: I0213 06:21:43.522373 1514 topology_manager.go:215] "Topology Admit Handler" podUID="91f172c0-edeb-4cb2-95a6-23ffcf8852f8" podNamespace="default" podName="nginx-deployment-6d5f899847-nhld7" Feb 13 06:21:43.525712 systemd[1]: Created slice kubepods-besteffort-pod91f172c0_edeb_4cb2_95a6_23ffcf8852f8.slice. 
Feb 13 06:21:43.550160 kubelet[1514]: E0213 06:21:43.550148 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:43.556328 kubelet[1514]: I0213 06:21:43.556318 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hndz\" (UniqueName: \"kubernetes.io/projected/91f172c0-edeb-4cb2-95a6-23ffcf8852f8-kube-api-access-2hndz\") pod \"nginx-deployment-6d5f899847-nhld7\" (UID: \"91f172c0-edeb-4cb2-95a6-23ffcf8852f8\") " pod="default/nginx-deployment-6d5f899847-nhld7" Feb 13 06:21:43.685193 kubelet[1514]: I0213 06:21:43.685155 1514 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 06:21:43.828297 env[1164]: time="2024-02-13T06:21:43.828229738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nhld7,Uid:91f172c0-edeb-4cb2-95a6-23ffcf8852f8,Namespace:default,Attempt:0,}" Feb 13 06:21:43.844235 systemd-networkd[1007]: lxc735c3715958a: Link UP Feb 13 06:21:43.866395 kernel: eth0: renamed from tmpaf840 Feb 13 06:21:43.887318 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 06:21:43.887367 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc735c3715958a: link becomes ready Feb 13 06:21:43.887377 systemd-networkd[1007]: lxc735c3715958a: Gained carrier Feb 13 06:21:44.550571 kubelet[1514]: E0213 06:21:44.550532 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:45.002793 env[1164]: time="2024-02-13T06:21:45.002638826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 06:21:45.002793 env[1164]: time="2024-02-13T06:21:45.002721785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 06:21:45.002793 env[1164]: time="2024-02-13T06:21:45.002752807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 06:21:45.003651 env[1164]: time="2024-02-13T06:21:45.003117087Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af840d7336afd6b2c8bb348c02d20ba33df5b0f5089e31e54b0f5ca2f5214fec pid=2702 runtime=io.containerd.runc.v2 Feb 13 06:21:45.014931 systemd[1]: Started cri-containerd-af840d7336afd6b2c8bb348c02d20ba33df5b0f5089e31e54b0f5ca2f5214fec.scope. Feb 13 06:21:45.036706 env[1164]: time="2024-02-13T06:21:45.036680140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nhld7,Uid:91f172c0-edeb-4cb2-95a6-23ffcf8852f8,Namespace:default,Attempt:0,} returns sandbox id \"af840d7336afd6b2c8bb348c02d20ba33df5b0f5089e31e54b0f5ca2f5214fec\"" Feb 13 06:21:45.037390 env[1164]: time="2024-02-13T06:21:45.037375093Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 06:21:45.550779 kubelet[1514]: E0213 06:21:45.550681 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:45.674662 systemd-networkd[1007]: lxc735c3715958a: Gained IPv6LL Feb 13 06:21:46.538318 kubelet[1514]: E0213 06:21:46.538203 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:46.551708 kubelet[1514]: E0213 06:21:46.551603 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:47.552237 kubelet[1514]: E0213 06:21:47.552125 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:47.667127 kubelet[1514]: I0213 06:21:47.667022 1514 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 06:21:48.552657 kubelet[1514]: E0213 06:21:48.552588 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:49.553269 kubelet[1514]: E0213 06:21:49.553160 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:50.554436 kubelet[1514]: E0213 06:21:50.554368 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:51.555375 kubelet[1514]: E0213 06:21:51.555324 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:52.556479 kubelet[1514]: E0213 06:21:52.556380 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:52.911846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319454135.mount: Deactivated successfully. Feb 13 06:21:53.557729 kubelet[1514]: E0213 06:21:53.557617 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:54.557967 kubelet[1514]: E0213 06:21:54.557858 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:55.558995 kubelet[1514]: E0213 06:21:55.558922 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:21:55.940264 update_engine[1156]: I0213 06:21:55.940038 1156 update_attempter.cc:509] Updating boot flags... 
Feb 13
06:24:02.644160 kubelet[1514]: E0213 06:24:02.644050 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:24:03.644707 kubelet[1514]: E0213 06:24:03.644578 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:24:04.645735 kubelet[1514]: E0213 06:24:04.645673 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:24:05.646726 kubelet[1514]: E0213 06:24:05.646610 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:24:06.538782 kubelet[1514]: E0213 06:24:06.538677 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:24:06.647676 kubelet[1514]: E0213 06:24:06.647611 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:24:07.105833 systemd[1]: Started sshd@5-147.75.49.59:22-104.248.146.70:52058.service. Feb 13 06:24:07.648239 kubelet[1514]: E0213 06:24:07.648130 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:24:08.147208 sshd[2770]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root Feb 13 06:24:08.648974 kubelet[1514]: E0213 06:24:08.648907 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:24:09.649275 kubelet[1514]: E0213 06:24:09.649199 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:24:09.856008 systemd[1]: Started sshd@6-147.75.49.59:22-124.251.111.197:47594.service. 
Feb 13 06:24:10.205045 sshd[2770]: Failed password for root from 104.248.146.70 port 52058 ssh2
Feb 13 06:24:10.649650 kubelet[1514]: E0213 06:24:10.649549 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:10.905256 sshd[2773]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=124.251.111.197 user=root
Feb 13 06:24:11.650421 kubelet[1514]: E0213 06:24:11.650304 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:12.002866 sshd[2770]: Received disconnect from 104.248.146.70 port 52058:11: Bye Bye [preauth]
Feb 13 06:24:12.002866 sshd[2770]: Disconnected from authenticating user root 104.248.146.70 port 52058 [preauth]
Feb 13 06:24:12.005353 systemd[1]: sshd@5-147.75.49.59:22-104.248.146.70:52058.service: Deactivated successfully.
Feb 13 06:24:12.651124 kubelet[1514]: E0213 06:24:12.651005 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:12.903343 sshd[2773]: Failed password for root from 124.251.111.197 port 47594 ssh2
Feb 13 06:24:13.651779 kubelet[1514]: E0213 06:24:13.651657 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:14.652840 kubelet[1514]: E0213 06:24:14.652755 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:14.767851 sshd[2773]: Received disconnect from 124.251.111.197 port 47594:11: Bye Bye [preauth]
Feb 13 06:24:14.767851 sshd[2773]: Disconnected from authenticating user root 124.251.111.197 port 47594 [preauth]
Feb 13 06:24:14.770401 systemd[1]: sshd@6-147.75.49.59:22-124.251.111.197:47594.service: Deactivated successfully.
Feb 13 06:24:15.654032 kubelet[1514]: E0213 06:24:15.653925 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:16.654430 kubelet[1514]: E0213 06:24:16.654292 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:17.655631 kubelet[1514]: E0213 06:24:17.655506 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:18.656686 kubelet[1514]: E0213 06:24:18.656564 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:19.656839 kubelet[1514]: E0213 06:24:19.656734 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:20.657547 kubelet[1514]: E0213 06:24:20.657440 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:21.658406 kubelet[1514]: E0213 06:24:21.658260 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:22.659602 kubelet[1514]: E0213 06:24:22.659485 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:23.660144 kubelet[1514]: E0213 06:24:23.660025 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:24.660689 kubelet[1514]: E0213 06:24:24.660571 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:25.661420 kubelet[1514]: E0213 06:24:25.661279 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:26.538578 kubelet[1514]: E0213 06:24:26.538467 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:26.661905 kubelet[1514]: E0213 06:24:26.661807 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:27.662797 kubelet[1514]: E0213 06:24:27.662683 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:28.664005 kubelet[1514]: E0213 06:24:28.663922 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:29.665051 kubelet[1514]: E0213 06:24:29.664971 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:30.665208 kubelet[1514]: E0213 06:24:30.665130 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:31.665898 kubelet[1514]: E0213 06:24:31.665823 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:32.667059 kubelet[1514]: E0213 06:24:32.666942 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:33.667697 kubelet[1514]: E0213 06:24:33.667574 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:34.667978 kubelet[1514]: E0213 06:24:34.667856 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:35.668784 kubelet[1514]: E0213 06:24:35.668675 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:36.669773 kubelet[1514]: E0213 06:24:36.669688 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:37.671056 kubelet[1514]: E0213 06:24:37.670933 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:38.671312 kubelet[1514]: E0213 06:24:38.671237 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:39.671617 kubelet[1514]: E0213 06:24:39.671502 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:40.671754 kubelet[1514]: E0213 06:24:40.671648 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:41.672007 kubelet[1514]: E0213 06:24:41.671884 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:42.672738 kubelet[1514]: E0213 06:24:42.672659 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:43.673294 kubelet[1514]: E0213 06:24:43.673227 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:44.673862 kubelet[1514]: E0213 06:24:44.673832 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:44.997213 kubelet[1514]: I0213 06:24:44.997176 1514 topology_manager.go:215] "Topology Admit Handler" podUID="c7239d9e-4aa9-4020-bdb3-280e3ee635c9" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 13 06:24:45.000432 systemd[1]: Created slice kubepods-besteffort-podc7239d9e_4aa9_4020_bdb3_280e3ee635c9.slice.
Feb 13 06:24:45.027424 kubelet[1514]: I0213 06:24:45.027336 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c7239d9e-4aa9-4020-bdb3-280e3ee635c9-data\") pod \"nfs-server-provisioner-0\" (UID: \"c7239d9e-4aa9-4020-bdb3-280e3ee635c9\") " pod="default/nfs-server-provisioner-0"
Feb 13 06:24:45.027662 kubelet[1514]: I0213 06:24:45.027561 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm2bd\" (UniqueName: \"kubernetes.io/projected/c7239d9e-4aa9-4020-bdb3-280e3ee635c9-kube-api-access-tm2bd\") pod \"nfs-server-provisioner-0\" (UID: \"c7239d9e-4aa9-4020-bdb3-280e3ee635c9\") " pod="default/nfs-server-provisioner-0"
Feb 13 06:24:45.304145 env[1164]: time="2024-02-13T06:24:45.303875151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c7239d9e-4aa9-4020-bdb3-280e3ee635c9,Namespace:default,Attempt:0,}"
Feb 13 06:24:45.332831 systemd-networkd[1007]: lxcf9fa8b5d899c: Link UP
Feb 13 06:24:45.352508 kernel: eth0: renamed from tmpc70ce
Feb 13 06:24:45.374913 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 13 06:24:45.375047 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf9fa8b5d899c: link becomes ready
Feb 13 06:24:45.375216 systemd-networkd[1007]: lxcf9fa8b5d899c: Gained carrier
Feb 13 06:24:45.616134 env[1164]: time="2024-02-13T06:24:45.616026792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 06:24:45.616134 env[1164]: time="2024-02-13T06:24:45.616072188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 06:24:45.616134 env[1164]: time="2024-02-13T06:24:45.616092542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 06:24:45.616321 env[1164]: time="2024-02-13T06:24:45.616234549Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c70ce9102c804a0bf22b231e27a0a5469cd60f73e16e191c3423abae4d4ac882 pid=2848 runtime=io.containerd.runc.v2
Feb 13 06:24:45.622517 systemd[1]: Started cri-containerd-c70ce9102c804a0bf22b231e27a0a5469cd60f73e16e191c3423abae4d4ac882.scope.
Feb 13 06:24:45.646453 env[1164]: time="2024-02-13T06:24:45.646376532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c7239d9e-4aa9-4020-bdb3-280e3ee635c9,Namespace:default,Attempt:0,} returns sandbox id \"c70ce9102c804a0bf22b231e27a0a5469cd60f73e16e191c3423abae4d4ac882\""
Feb 13 06:24:45.674587 kubelet[1514]: E0213 06:24:45.674535 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:46.537960 kubelet[1514]: E0213 06:24:46.537847 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:46.675038 kubelet[1514]: E0213 06:24:46.674914 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:46.858985 systemd-networkd[1007]: lxcf9fa8b5d899c: Gained IPv6LL
Feb 13 06:24:47.676280 kubelet[1514]: E0213 06:24:47.676163 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:48.676937 kubelet[1514]: E0213 06:24:48.676830 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:49.677955 kubelet[1514]: E0213 06:24:49.677823 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:50.679051 kubelet[1514]: E0213 06:24:50.678935 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:51.679486 kubelet[1514]: E0213 06:24:51.679370 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:52.680605 kubelet[1514]: E0213 06:24:52.680486 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:53.681026 kubelet[1514]: E0213 06:24:53.680910 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:54.681156 kubelet[1514]: E0213 06:24:54.681088 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:55.682258 kubelet[1514]: E0213 06:24:55.682139 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:56.682524 kubelet[1514]: E0213 06:24:56.682413 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:57.683107 kubelet[1514]: E0213 06:24:57.682991 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:58.683780 kubelet[1514]: E0213 06:24:58.683651 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:24:59.684043 kubelet[1514]: E0213 06:24:59.683923 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:00.684508 kubelet[1514]: E0213 06:25:00.684408 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:01.685645 kubelet[1514]: E0213 06:25:01.685524 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:02.685781 kubelet[1514]: E0213 06:25:02.685707 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:03.686970 kubelet[1514]: E0213 06:25:03.686896 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:04.687734 kubelet[1514]: E0213 06:25:04.687616 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:05.688759 kubelet[1514]: E0213 06:25:05.688675 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:06.537648 kubelet[1514]: E0213 06:25:06.537537 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:06.688955 kubelet[1514]: E0213 06:25:06.688892 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:07.689177 kubelet[1514]: E0213 06:25:07.689099 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:08.690424 kubelet[1514]: E0213 06:25:08.690304 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:09.691567 kubelet[1514]: E0213 06:25:09.691455 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:10.692335 kubelet[1514]: E0213 06:25:10.692261 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:11.693050 kubelet[1514]: E0213 06:25:11.692932 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:12.693595 kubelet[1514]: E0213 06:25:12.693483 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:13.694503 kubelet[1514]: E0213 06:25:13.694377 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:14.695424 kubelet[1514]: E0213 06:25:14.695317 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:15.486626 systemd[1]: Started sshd@7-147.75.49.59:22-99.35.129.114:20368.service.
Feb 13 06:25:15.695948 kubelet[1514]: E0213 06:25:15.695878 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:16.648858 sshd[2884]: Invalid user admin from 99.35.129.114 port 20368
Feb 13 06:25:16.654977 sshd[2884]: pam_faillock(sshd:auth): User unknown
Feb 13 06:25:16.655752 sshd[2884]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 06:25:16.655769 sshd[2884]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=99.35.129.114
Feb 13 06:25:16.656029 sshd[2884]: pam_faillock(sshd:auth): User unknown
Feb 13 06:25:16.696615 kubelet[1514]: E0213 06:25:16.696555 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:17.697588 kubelet[1514]: E0213 06:25:17.697512 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:18.698571 kubelet[1514]: E0213 06:25:18.698447 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:19.049514 sshd[2884]: Failed password for invalid user admin from 99.35.129.114 port 20368 ssh2
Feb 13 06:25:19.615054 sshd[2887]: pam_faillock(sshd:auth): User unknown
Feb 13 06:25:19.617862 sshd[2884]: Postponed keyboard-interactive for invalid user admin from 99.35.129.114 port 20368 ssh2 [preauth]
Feb 13 06:25:19.699773 kubelet[1514]: E0213 06:25:19.699693 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:19.860321 sshd[2887]: pam_unix(sshd:auth): check pass; user unknown
Feb 13 06:25:19.861344 sshd[2887]: pam_faillock(sshd:auth): User unknown
Feb 13 06:25:20.700249 kubelet[1514]: E0213 06:25:20.700170 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:21.667812 sshd[2884]: PAM: Permission denied for illegal user admin from 99.35.129.114
Feb 13 06:25:21.668655 sshd[2884]: Failed keyboard-interactive/pam for invalid user admin from 99.35.129.114 port 20368 ssh2
Feb 13 06:25:21.701495 kubelet[1514]: E0213 06:25:21.701420 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:21.918129 sshd[2884]: Connection closed by invalid user admin 99.35.129.114 port 20368 [preauth]
Feb 13 06:25:21.920725 systemd[1]: sshd@7-147.75.49.59:22-99.35.129.114:20368.service: Deactivated successfully.
Feb 13 06:25:22.702817 kubelet[1514]: E0213 06:25:22.702695 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:23.703417 kubelet[1514]: E0213 06:25:23.703278 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:24.704357 kubelet[1514]: E0213 06:25:24.704240 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:25.705561 kubelet[1514]: E0213 06:25:25.705457 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:26.538401 kubelet[1514]: E0213 06:25:26.538262 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:26.706593 kubelet[1514]: E0213 06:25:26.706524 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:27.707264 kubelet[1514]: E0213 06:25:27.707146 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:28.708101 kubelet[1514]: E0213 06:25:28.708028 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:29.709176 kubelet[1514]: E0213 06:25:29.709049 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:30.709490 kubelet[1514]: E0213 06:25:30.709356 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:31.710692 kubelet[1514]: E0213 06:25:31.710567 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:32.711198 kubelet[1514]: E0213 06:25:32.711080 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:33.712230 kubelet[1514]: E0213 06:25:33.712124 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:34.712638 kubelet[1514]: E0213 06:25:34.712528 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:35.713564 kubelet[1514]: E0213 06:25:35.713527 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:36.713939 kubelet[1514]: E0213 06:25:36.713825 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:37.715024 kubelet[1514]: E0213 06:25:37.714906 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:38.082969 systemd[1]: Started sshd@8-147.75.49.59:22-218.92.0.27:25097.service.
Feb 13 06:25:38.715728 kubelet[1514]: E0213 06:25:38.715614 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:39.116276 sshd[2894]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 06:25:39.716816 kubelet[1514]: E0213 06:25:39.716695 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:40.717296 kubelet[1514]: E0213 06:25:40.717179 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:40.997627 sshd[2894]: Failed password for root from 218.92.0.27 port 25097 ssh2
Feb 13 06:25:41.717690 kubelet[1514]: E0213 06:25:41.717572 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:42.718360 kubelet[1514]: E0213 06:25:42.718233 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:43.719124 kubelet[1514]: E0213 06:25:43.719002 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:44.719410 kubelet[1514]: E0213 06:25:44.719285 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:45.582108 sshd[2894]: Failed password for root from 218.92.0.27 port 25097 ssh2
Feb 13 06:25:45.719749 kubelet[1514]: E0213 06:25:45.719626 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:46.537660 kubelet[1514]: E0213 06:25:46.537588 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:46.720855 kubelet[1514]: E0213 06:25:46.720742 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:46.788913 sshd[2894]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 13 06:25:47.721027 kubelet[1514]: E0213 06:25:47.720928 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:48.722228 kubelet[1514]: E0213 06:25:48.722115 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:48.966940 sshd[2894]: Failed password for root from 218.92.0.27 port 25097 ssh2
Feb 13 06:25:49.722879 kubelet[1514]: E0213 06:25:49.722770 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:50.624537 sshd[2894]: Received disconnect from 218.92.0.27 port 25097:11: [preauth]
Feb 13 06:25:50.624537 sshd[2894]: Disconnected from authenticating user root 218.92.0.27 port 25097 [preauth]
Feb 13 06:25:50.625070 sshd[2894]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 06:25:50.627140 systemd[1]: sshd@8-147.75.49.59:22-218.92.0.27:25097.service: Deactivated successfully.
Feb 13 06:25:50.723797 kubelet[1514]: E0213 06:25:50.723688 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:50.782594 systemd[1]: Started sshd@9-147.75.49.59:22-218.92.0.27:42050.service.
Feb 13 06:25:51.724268 kubelet[1514]: E0213 06:25:51.724146 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:51.791188 sshd[2898]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 06:25:52.725315 kubelet[1514]: E0213 06:25:52.725198 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:53.726426 kubelet[1514]: E0213 06:25:53.726308 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:53.989464 sshd[2898]: Failed password for root from 218.92.0.27 port 42050 ssh2
Feb 13 06:25:54.727301 kubelet[1514]: E0213 06:25:54.727182 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:55.728285 kubelet[1514]: E0213 06:25:55.728165 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:56.729446 kubelet[1514]: E0213 06:25:56.729291 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:57.370313 sshd[2898]: Failed password for root from 218.92.0.27 port 42050 ssh2
Feb 13 06:25:57.730230 kubelet[1514]: E0213 06:25:57.730002 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:58.731256 kubelet[1514]: E0213 06:25:58.731147 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:25:59.638589 sshd[2898]: Failed password for root from 218.92.0.27 port 42050 ssh2
Feb 13 06:25:59.732233 kubelet[1514]: E0213 06:25:59.732130 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:00.732573 kubelet[1514]: E0213 06:26:00.732445 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:01.448166 sshd[2898]: Received disconnect from 218.92.0.27 port 42050:11: [preauth]
Feb 13 06:26:01.448166 sshd[2898]: Disconnected from authenticating user root 218.92.0.27 port 42050 [preauth]
Feb 13 06:26:01.448716 sshd[2898]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 06:26:01.450797 systemd[1]: sshd@9-147.75.49.59:22-218.92.0.27:42050.service: Deactivated successfully.
Feb 13 06:26:01.603562 systemd[1]: Started sshd@10-147.75.49.59:22-218.92.0.27:42949.service.
Feb 13 06:26:01.733533 kubelet[1514]: E0213 06:26:01.733317 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:02.594353 sshd[2904]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 06:26:02.733820 kubelet[1514]: E0213 06:26:02.733709 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:03.734297 kubelet[1514]: E0213 06:26:03.734174 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:04.301182 sshd[2904]: Failed password for root from 218.92.0.27 port 42949 ssh2
Feb 13 06:26:04.735583 kubelet[1514]: E0213 06:26:04.735362 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:05.736185 kubelet[1514]: E0213 06:26:05.736064 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:06.538512 kubelet[1514]: E0213 06:26:06.538369 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:06.737360 kubelet[1514]: E0213 06:26:06.737263 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:06.902203 sshd[2904]: Failed password for root from 218.92.0.27 port 42949 ssh2
Feb 13 06:26:07.738603 kubelet[1514]: E0213 06:26:07.738475 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:08.739085 kubelet[1514]: E0213 06:26:08.738971 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:09.740360 kubelet[1514]: E0213 06:26:09.740236 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:10.611904 sshd[2904]: Failed password for root from 218.92.0.27 port 42949 ssh2
Feb 13 06:26:10.740817 kubelet[1514]: E0213 06:26:10.740701 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:11.741094 kubelet[1514]: E0213 06:26:11.740951 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:12.242834 sshd[2904]: Received disconnect from 218.92.0.27 port 42949:11: [preauth]
Feb 13 06:26:12.242834 sshd[2904]: Disconnected from authenticating user root 218.92.0.27 port 42949 [preauth]
Feb 13 06:26:12.243376 sshd[2904]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.27 user=root
Feb 13 06:26:12.245461 systemd[1]: sshd@10-147.75.49.59:22-218.92.0.27:42949.service: Deactivated successfully.
Feb 13 06:26:12.742351 kubelet[1514]: E0213 06:26:12.742250 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:13.743605 kubelet[1514]: E0213 06:26:13.743497 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:14.744819 kubelet[1514]: E0213 06:26:14.744699 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:15.745691 kubelet[1514]: E0213 06:26:15.745570 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:16.746502 kubelet[1514]: E0213 06:26:16.746400 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:17.747065 kubelet[1514]: E0213 06:26:17.746945 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:18.747635 kubelet[1514]: E0213 06:26:18.747518 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:19.748350 kubelet[1514]: E0213 06:26:19.748226 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:20.749346 kubelet[1514]: E0213 06:26:20.749225 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:21.750219 kubelet[1514]: E0213 06:26:21.750112 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:22.750690 kubelet[1514]: E0213 06:26:22.750572 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:23.751649 kubelet[1514]: E0213 06:26:23.751575 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:24.752229 kubelet[1514]: E0213 06:26:24.752153 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:25.752373 kubelet[1514]: E0213 06:26:25.752295 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:26.538225 kubelet[1514]: E0213 06:26:26.538108 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:26.753298 kubelet[1514]: E0213 06:26:26.753177 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:27.753427 kubelet[1514]: E0213 06:26:27.753316 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:28.754165 kubelet[1514]: E0213 06:26:28.754049 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:29.754762 kubelet[1514]: E0213 06:26:29.754690 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:30.755864 kubelet[1514]: E0213 06:26:30.755750 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:31.756778 kubelet[1514]: E0213 06:26:31.756664 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:32.756962 kubelet[1514]: E0213 06:26:32.756847 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:33.757454 kubelet[1514]: E0213 06:26:33.757333 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:34.757655 kubelet[1514]: E0213 06:26:34.757552 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:35.758504 kubelet[1514]: E0213 06:26:35.758398 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:36.759375 kubelet[1514]: E0213 06:26:36.759256 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:37.760654 kubelet[1514]: E0213 06:26:37.760534 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:38.760947 kubelet[1514]: E0213 06:26:38.760830 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:39.761189 kubelet[1514]: E0213 06:26:39.761068 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:40.761820 kubelet[1514]: E0213 06:26:40.761702 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:41.762009 kubelet[1514]: E0213 06:26:41.761907 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:42.762334 kubelet[1514]: E0213 06:26:42.762219 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:43.763518 kubelet[1514]: E0213 06:26:43.763413 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:44.764614 kubelet[1514]: E0213 06:26:44.764496 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:45.765405 kubelet[1514]: E0213 06:26:45.765262 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:46.537801 kubelet[1514]: E0213 06:26:46.537682 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:46.766560 kubelet[1514]: E0213 06:26:46.766442 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:47.767114 kubelet[1514]: E0213 06:26:47.767005 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:48.768194 kubelet[1514]: E0213 06:26:48.768073 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:49.769157 kubelet[1514]: E0213 06:26:49.769039 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:50.770093 kubelet[1514]: E0213 06:26:50.769974 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:51.771195 kubelet[1514]: E0213 06:26:51.771094 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:52.771514 kubelet[1514]: E0213 06:26:52.771406 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:53.772812 kubelet[1514]: E0213 06:26:53.772690 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:54.773225 kubelet[1514]: E0213 06:26:54.773117 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:55.774242 kubelet[1514]: E0213 06:26:55.774120 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:56.775175 kubelet[1514]: E0213 06:26:56.775048 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:57.775539 kubelet[1514]: E0213 06:26:57.775460 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:58.776041 kubelet[1514]: E0213 06:26:58.775923 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:26:59.776280 kubelet[1514]: E0213 06:26:59.776204 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:00.777567 kubelet[1514]: E0213 06:27:00.777457 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:01.777812 kubelet[1514]: E0213 06:27:01.777695 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:02.778105 kubelet[1514]: E0213 06:27:02.777984 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:03.779340 kubelet[1514]: E0213 06:27:03.779266 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:04.779944 kubelet[1514]: E0213 06:27:04.779834 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:05.780362 kubelet[1514]: E0213 06:27:05.780242 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:06.538432 kubelet[1514]: E0213 06:27:06.538315 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:06.781530 kubelet[1514]: E0213 06:27:06.781424 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:07.782657 kubelet[1514]: E0213 06:27:07.782537 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:08.783870 kubelet[1514]: E0213 06:27:08.783751 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:09.785081 kubelet[1514]: E0213 06:27:09.784962 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:10.785595 kubelet[1514]: E0213 06:27:10.785486 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:11.786083 kubelet[1514]: E0213 06:27:11.785980 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:12.786924 kubelet[1514]: E0213 06:27:12.786817 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:13.787406 kubelet[1514]: E0213 06:27:13.787269 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:14.788561 kubelet[1514]: E0213 06:27:14.788493 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:15.788800 kubelet[1514]: E0213 06:27:15.788697 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:16.789892 kubelet[1514]: E0213 06:27:16.789786 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:17.790207 kubelet[1514]: E0213 06:27:17.790085 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:18.790684 kubelet[1514]: E0213 06:27:18.790575 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:19.791818 kubelet[1514]: E0213 06:27:19.791697 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:20.792978 kubelet[1514]: E0213 06:27:20.792867 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:21.793958 kubelet[1514]: E0213 06:27:21.793837 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:22.795067 kubelet[1514]: E0213 06:27:22.794943 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:23.795314 kubelet[1514]: E0213 06:27:23.795193 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:24.796443 kubelet[1514]: E0213 06:27:24.796335 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:25.797508 kubelet[1514]: E0213 06:27:25.797372 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:26.537823 kubelet[1514]: E0213 06:27:26.537703 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:26.797803 kubelet[1514]: E0213 06:27:26.797554 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:27.798777 kubelet[1514]: E0213 06:27:27.798665 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:28.798931 kubelet[1514]: E0213 06:27:28.798810 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:29.799115 kubelet[1514]: E0213 06:27:29.799046 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:30.799999 kubelet[1514]: E0213 06:27:30.799928 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:31.800229 kubelet[1514]: E0213 06:27:31.800153 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:32.012048 update_engine[1156]: I0213 06:27:32.011962 1156 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 06:27:32.012048 update_engine[1156]: I0213 06:27:32.012058 1156 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 06:27:32.013886 update_engine[1156]: I0213 06:27:32.013836 1156 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 06:27:32.014862 update_engine[1156]: I0213 06:27:32.014813 1156 omaha_request_params.cc:62] Current group set to lts
Feb 13 06:27:32.015233 update_engine[1156]: I0213 06:27:32.015188 1156 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 06:27:32.015233 update_engine[1156]: I0213 06:27:32.015213 1156 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 06:27:32.015636 update_engine[1156]: I0213 06:27:32.015257 1156 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 06:27:32.015636 update_engine[1156]: I0213 06:27:32.015341 1156 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 06:27:32.015636 update_engine[1156]: I0213 06:27:32.015581 1156 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 13 06:27:32.015636 update_engine[1156]: I0213 06:27:32.015609 1156 omaha_request_action.cc:271] Request:
Feb 13 06:27:32.015636 update_engine[1156]:
Feb 13 06:27:32.015636 update_engine[1156]:
Feb 13 06:27:32.015636 update_engine[1156]:
Feb 13 06:27:32.015636 update_engine[1156]:
Feb 13 06:27:32.015636 update_engine[1156]:
Feb 13 06:27:32.015636 update_engine[1156]:
Feb 13 06:27:32.015636 update_engine[1156]:
Feb 13 06:27:32.015636 update_engine[1156]:
Feb 13 06:27:32.015636 update_engine[1156]: I0213 06:27:32.015625 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 06:27:32.017581 locksmithd[1201]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 06:27:32.019095 update_engine[1156]: I0213 06:27:32.019085 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 06:27:32.019156 update_engine[1156]: E0213 06:27:32.019147 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 06:27:32.019192 update_engine[1156]: I0213 06:27:32.019186 1156 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 06:27:32.801034 kubelet[1514]: E0213 06:27:32.800918 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:33.801980 kubelet[1514]: E0213 06:27:33.801868 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:34.724034 systemd[1]: Started sshd@11-147.75.49.59:22-104.248.146.70:34460.service.
Feb 13 06:27:34.802543 kubelet[1514]: E0213 06:27:34.802473 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:35.706266 sshd[2922]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.146.70 user=root
Feb 13 06:27:35.803365 kubelet[1514]: E0213 06:27:35.803266 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:36.804539 kubelet[1514]: E0213 06:27:36.804435 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:37.804826 kubelet[1514]: E0213 06:27:37.804713 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:37.849415 sshd[2922]: Failed password for root from 104.248.146.70 port 34460 ssh2
Feb 13 06:27:38.805216 kubelet[1514]: E0213 06:27:38.805110 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:39.553831 sshd[2922]: Received disconnect from 104.248.146.70 port 34460:11: Bye Bye [preauth]
Feb 13 06:27:39.553831 sshd[2922]: Disconnected from authenticating user root 104.248.146.70 port 34460 [preauth]
Feb 13 06:27:39.556364 systemd[1]: sshd@11-147.75.49.59:22-104.248.146.70:34460.service: Deactivated successfully.
Feb 13 06:27:39.806429 kubelet[1514]: E0213 06:27:39.806214 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:40.806876 kubelet[1514]: E0213 06:27:40.806754 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:41.808126 kubelet[1514]: E0213 06:27:41.808012 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:41.939670 update_engine[1156]: I0213 06:27:41.939553 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 06:27:41.940513 update_engine[1156]: I0213 06:27:41.940037 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 06:27:41.940513 update_engine[1156]: E0213 06:27:41.940239 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 06:27:41.940513 update_engine[1156]: I0213 06:27:41.940448 1156 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 06:27:42.808638 kubelet[1514]: E0213 06:27:42.808524 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:43.809203 kubelet[1514]: E0213 06:27:43.809083 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:44.810298 kubelet[1514]: E0213 06:27:44.810201 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:45.811105 kubelet[1514]: E0213 06:27:45.810998 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:46.537745 kubelet[1514]: E0213 06:27:46.537640 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:46.631271 env[1164]: time="2024-02-13T06:27:46.631219164Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 06:27:46.633942 env[1164]: time="2024-02-13T06:27:46.633910171Z" level=info msg="StopContainer for \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\" with timeout 2 (s)"
Feb 13 06:27:46.634050 env[1164]: time="2024-02-13T06:27:46.634014984Z" level=info msg="Stop container \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\" with signal terminated"
Feb 13 06:27:46.637014 systemd-networkd[1007]: lxc_health: Link DOWN
Feb 13 06:27:46.637018 systemd-networkd[1007]: lxc_health: Lost carrier
Feb 13 06:27:46.679777 systemd[1]: cri-containerd-98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb.scope: Deactivated successfully.
Feb 13 06:27:46.679942 systemd[1]: cri-containerd-98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb.scope: Consumed 6.456s CPU time.
Feb 13 06:27:46.689746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb-rootfs.mount: Deactivated successfully.
Feb 13 06:27:46.706978 env[1164]: time="2024-02-13T06:27:46.706933942Z" level=info msg="shim disconnected" id=98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb
Feb 13 06:27:46.707113 env[1164]: time="2024-02-13T06:27:46.706978892Z" level=warning msg="cleaning up after shim disconnected" id=98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb namespace=k8s.io
Feb 13 06:27:46.707113 env[1164]: time="2024-02-13T06:27:46.706990815Z" level=info msg="cleaning up dead shim"
Feb 13 06:27:46.712616 kubelet[1514]: E0213 06:27:46.712565 1514 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 06:27:46.713336 env[1164]: time="2024-02-13T06:27:46.713282311Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2968 runtime=io.containerd.runc.v2\n"
Feb 13 06:27:46.714557 env[1164]: time="2024-02-13T06:27:46.714501153Z" level=info msg="StopContainer for \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\" returns successfully"
Feb 13 06:27:46.715032 env[1164]: time="2024-02-13T06:27:46.714970914Z" level=info msg="StopPodSandbox for \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\""
Feb 13 06:27:46.715125 env[1164]: time="2024-02-13T06:27:46.715035539Z" level=info msg="Container to stop \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.715125 env[1164]: time="2024-02-13T06:27:46.715056665Z" level=info msg="Container to stop \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.715125 env[1164]: time="2024-02-13T06:27:46.715072272Z" level=info msg="Container to stop \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.715125 env[1164]: time="2024-02-13T06:27:46.715086621Z" level=info msg="Container to stop \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.715125 env[1164]: time="2024-02-13T06:27:46.715101021Z" level=info msg="Container to stop \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 06:27:46.717162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515-shm.mount: Deactivated successfully.
Feb 13 06:27:46.722018 systemd[1]: cri-containerd-04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515.scope: Deactivated successfully.
Feb 13 06:27:46.737620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515-rootfs.mount: Deactivated successfully.
Feb 13 06:27:46.757943 env[1164]: time="2024-02-13T06:27:46.757861521Z" level=info msg="shim disconnected" id=04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515
Feb 13 06:27:46.757943 env[1164]: time="2024-02-13T06:27:46.757903744Z" level=warning msg="cleaning up after shim disconnected" id=04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515 namespace=k8s.io
Feb 13 06:27:46.757943 env[1164]: time="2024-02-13T06:27:46.757916082Z" level=info msg="cleaning up dead shim"
Feb 13 06:27:46.763516 env[1164]: time="2024-02-13T06:27:46.763489713Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3000 runtime=io.containerd.runc.v2\n"
Feb 13 06:27:46.763767 env[1164]: time="2024-02-13T06:27:46.763719364Z" level=info msg="TearDown network for sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" successfully"
Feb 13 06:27:46.763767 env[1164]: time="2024-02-13T06:27:46.763740115Z" level=info msg="StopPodSandbox for \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" returns successfully"
Feb 13 06:27:46.811442 kubelet[1514]: E0213 06:27:46.811226 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:46.850685 kubelet[1514]: I0213 06:27:46.850578 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbmdw\" (UniqueName: \"kubernetes.io/projected/3272e48c-04c1-4732-b339-06eeda0fbf9d-kube-api-access-qbmdw\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") "
Feb 13 06:27:46.850685 kubelet[1514]: I0213 06:27:46.850679 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-hostproc\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") "
Feb 13 06:27:46.851214 kubelet[1514]: I0213 06:27:46.850737 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-etc-cni-netd\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") "
Feb 13 06:27:46.851214 kubelet[1514]: I0213 06:27:46.850793 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-run\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") "
Feb 13 06:27:46.851214 kubelet[1514]: I0213 06:27:46.850850 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-cgroup\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") "
Feb 13 06:27:46.851214 kubelet[1514]: I0213 06:27:46.850925 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3272e48c-04c1-4732-b339-06eeda0fbf9d-hubble-tls\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") "
Feb 13 06:27:46.851214 kubelet[1514]: I0213 06:27:46.850887 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-hostproc" (OuterVolumeSpecName: "hostproc") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:46.851214 kubelet[1514]: I0213 06:27:46.850935 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:46.852289 kubelet[1514]: I0213 06:27:46.850985 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-host-proc-sys-net\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") "
Feb 13 06:27:46.852289 kubelet[1514]: I0213 06:27:46.850987 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:46.852289 kubelet[1514]: I0213 06:27:46.851053 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-host-proc-sys-kernel\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") "
Feb 13 06:27:46.852289 kubelet[1514]: I0213 06:27:46.851063 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:46.852289 kubelet[1514]: I0213 06:27:46.851052 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:46.852906 kubelet[1514]: I0213 06:27:46.851117 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-lib-modules\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") "
Feb 13 06:27:46.852906 kubelet[1514]: I0213 06:27:46.851131 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.852906 kubelet[1514]: I0213 06:27:46.851173 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-xtables-lock\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " Feb 13 06:27:46.852906 kubelet[1514]: I0213 06:27:46.851232 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cni-path\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " Feb 13 06:27:46.852906 kubelet[1514]: I0213 06:27:46.851232 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.853452 kubelet[1514]: I0213 06:27:46.851253 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.853452 kubelet[1514]: I0213 06:27:46.851299 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-config-path\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " Feb 13 06:27:46.853452 kubelet[1514]: I0213 06:27:46.851334 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cni-path" (OuterVolumeSpecName: "cni-path") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.853452 kubelet[1514]: I0213 06:27:46.851370 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3272e48c-04c1-4732-b339-06eeda0fbf9d-clustermesh-secrets\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " Feb 13 06:27:46.853452 kubelet[1514]: I0213 06:27:46.851475 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-bpf-maps\") pod \"3272e48c-04c1-4732-b339-06eeda0fbf9d\" (UID: \"3272e48c-04c1-4732-b339-06eeda0fbf9d\") " Feb 13 06:27:46.853452 kubelet[1514]: I0213 06:27:46.851558 1514 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-host-proc-sys-net\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:46.854356 kubelet[1514]: I0213 06:27:46.851556 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:46.854356 kubelet[1514]: I0213 06:27:46.851599 1514 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-host-proc-sys-kernel\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:46.854356 kubelet[1514]: I0213 06:27:46.851631 1514 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-lib-modules\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:46.854356 kubelet[1514]: I0213 06:27:46.851659 1514 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-xtables-lock\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:46.854356 kubelet[1514]: I0213 06:27:46.851691 1514 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cni-path\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:46.854356 kubelet[1514]: I0213 06:27:46.851720 1514 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-hostproc\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:46.854356 kubelet[1514]: I0213 06:27:46.851750 1514 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-etc-cni-netd\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:46.854356 kubelet[1514]: I0213 06:27:46.851779 1514 reconciler_common.go:300] "Volume 
detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-run\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:46.855185 kubelet[1514]: I0213 06:27:46.851808 1514 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-cgroup\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:46.856603 kubelet[1514]: I0213 06:27:46.856457 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 06:27:46.856846 kubelet[1514]: I0213 06:27:46.856831 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3272e48c-04c1-4732-b339-06eeda0fbf9d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 06:27:46.856891 kubelet[1514]: I0213 06:27:46.856841 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3272e48c-04c1-4732-b339-06eeda0fbf9d-kube-api-access-qbmdw" (OuterVolumeSpecName: "kube-api-access-qbmdw") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "kube-api-access-qbmdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 06:27:46.856891 kubelet[1514]: I0213 06:27:46.856857 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3272e48c-04c1-4732-b339-06eeda0fbf9d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3272e48c-04c1-4732-b339-06eeda0fbf9d" (UID: "3272e48c-04c1-4732-b339-06eeda0fbf9d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 06:27:46.857477 systemd[1]: var-lib-kubelet-pods-3272e48c\x2d04c1\x2d4732\x2db339\x2d06eeda0fbf9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqbmdw.mount: Deactivated successfully. Feb 13 06:27:46.857533 systemd[1]: var-lib-kubelet-pods-3272e48c\x2d04c1\x2d4732\x2db339\x2d06eeda0fbf9d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 06:27:46.857569 systemd[1]: var-lib-kubelet-pods-3272e48c\x2d04c1\x2d4732\x2db339\x2d06eeda0fbf9d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 06:27:46.952816 kubelet[1514]: I0213 06:27:46.952712 1514 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3272e48c-04c1-4732-b339-06eeda0fbf9d-bpf-maps\") on node \"10.67.80.15\" DevicePath \"\""
Feb 13 06:27:46.952816 kubelet[1514]: I0213 06:27:46.952793 1514 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qbmdw\" (UniqueName: \"kubernetes.io/projected/3272e48c-04c1-4732-b339-06eeda0fbf9d-kube-api-access-qbmdw\") on node \"10.67.80.15\" DevicePath \"\""
Feb 13 06:27:46.952816 kubelet[1514]: I0213 06:27:46.952828 1514 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3272e48c-04c1-4732-b339-06eeda0fbf9d-hubble-tls\") on node \"10.67.80.15\" DevicePath \"\""
Feb 13 06:27:46.953356 kubelet[1514]: I0213 06:27:46.952859 1514 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3272e48c-04c1-4732-b339-06eeda0fbf9d-cilium-config-path\") on node \"10.67.80.15\" DevicePath \"\""
Feb 13 06:27:46.953356 kubelet[1514]: I0213 06:27:46.952891 1514 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3272e48c-04c1-4732-b339-06eeda0fbf9d-clustermesh-secrets\") on node \"10.67.80.15\" DevicePath \"\""
Feb 13 06:27:47.693457 kubelet[1514]: I0213 06:27:47.693376 1514 scope.go:117] "RemoveContainer" containerID="98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb"
Feb 13 06:27:47.696454 env[1164]: time="2024-02-13T06:27:47.696349638Z" level=info msg="RemoveContainer for \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\""
Feb 13 06:27:47.699096 env[1164]: time="2024-02-13T06:27:47.699055228Z" level=info msg="RemoveContainer for \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\" returns successfully"
Feb 13 06:27:47.699255 kubelet[1514]: I0213 06:27:47.699221 1514 scope.go:117] "RemoveContainer" containerID="c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59"
Feb 13 06:27:47.700096 env[1164]: time="2024-02-13T06:27:47.700080526Z" level=info msg="RemoveContainer for \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\""
Feb 13 06:27:47.700232 systemd[1]: Removed slice kubepods-burstable-pod3272e48c_04c1_4732_b339_06eeda0fbf9d.slice.
Feb 13 06:27:47.700282 systemd[1]: kubepods-burstable-pod3272e48c_04c1_4732_b339_06eeda0fbf9d.slice: Consumed 6.507s CPU time.
Feb 13 06:27:47.701094 env[1164]: time="2024-02-13T06:27:47.701065096Z" level=info msg="RemoveContainer for \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\" returns successfully"
Feb 13 06:27:47.701206 kubelet[1514]: I0213 06:27:47.701197 1514 scope.go:117] "RemoveContainer" containerID="923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b"
Feb 13 06:27:47.701795 env[1164]: time="2024-02-13T06:27:47.701749564Z" level=info msg="RemoveContainer for \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\""
Feb 13 06:27:47.702732 env[1164]: time="2024-02-13T06:27:47.702719275Z" level=info msg="RemoveContainer for \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\" returns successfully"
Feb 13 06:27:47.702819 kubelet[1514]: I0213 06:27:47.702812 1514 scope.go:117] "RemoveContainer" containerID="c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244"
Feb 13 06:27:47.703235 env[1164]: time="2024-02-13T06:27:47.703223875Z" level=info msg="RemoveContainer for \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\""
Feb 13 06:27:47.704220 env[1164]: time="2024-02-13T06:27:47.704184811Z" level=info msg="RemoveContainer for \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\" returns successfully"
Feb 13 06:27:47.704256 kubelet[1514]: I0213 06:27:47.704233 1514 scope.go:117] "RemoveContainer" containerID="d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf"
Feb 13 06:27:47.704602 env[1164]: time="2024-02-13T06:27:47.704590702Z" level=info msg="RemoveContainer for \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\""
Feb 13 06:27:47.705385 env[1164]: time="2024-02-13T06:27:47.705372022Z" level=info msg="RemoveContainer for \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\" returns successfully"
Feb 13 06:27:47.705516 kubelet[1514]: I0213 06:27:47.705478 1514 scope.go:117] "RemoveContainer" containerID="98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb"
Feb 13 06:27:47.705618 env[1164]: time="2024-02-13T06:27:47.705555829Z" level=error msg="ContainerStatus for \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\": not found"
Feb 13 06:27:47.705703 kubelet[1514]: E0213 06:27:47.705670 1514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\": not found" containerID="98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb"
Feb 13 06:27:47.705734 kubelet[1514]: I0213 06:27:47.705719 1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb"} err="failed to get container status \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"98b6110219ac29ffb3b2624f9517b69b3026d96cd0ae9bbe240815c92ee0b9cb\": not found"
Feb 13 06:27:47.705734 kubelet[1514]: I0213 06:27:47.705731 1514 scope.go:117] "RemoveContainer" containerID="c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59"
Feb 13 06:27:47.705852 env[1164]: time="2024-02-13T06:27:47.705827839Z" level=error msg="ContainerStatus for \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\": not found"
Feb 13 06:27:47.705903 kubelet[1514]: E0213 06:27:47.705897 1514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\": not found" containerID="c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59"
Feb 13 06:27:47.705924 kubelet[1514]: I0213 06:27:47.705913 1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59"} err="failed to get container status \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\": rpc error: code = NotFound desc = an error occurred when try to find container \"c149710a00f818e5e527a0bf5b8ae25a5ade35d6be7a01d45ba6aeff961c1c59\": not found"
Feb 13 06:27:47.705924 kubelet[1514]: I0213 06:27:47.705919 1514 scope.go:117] "RemoveContainer" containerID="923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b"
Feb 13 06:27:47.706005 env[1164]: time="2024-02-13T06:27:47.705983794Z" level=error msg="ContainerStatus for \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\": not found"
Feb 13 06:27:47.706048 kubelet[1514]: E0213 06:27:47.706042 1514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\": not found" containerID="923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b"
Feb 13 06:27:47.706073 kubelet[1514]: I0213 06:27:47.706057 1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b"} err="failed to get container status \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\": rpc error: code = NotFound desc = an error occurred when try to find container \"923ac8985c3730e697fad3585c1816de9d3951db2389259de2e9aa306fff512b\": not found"
Feb 13 06:27:47.706073 kubelet[1514]: I0213 06:27:47.706063 1514 scope.go:117] "RemoveContainer" containerID="c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244"
Feb 13 06:27:47.706145 env[1164]: time="2024-02-13T06:27:47.706124701Z" level=error msg="ContainerStatus for \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\": not found"
Feb 13 06:27:47.706185 kubelet[1514]: E0213 06:27:47.706180 1514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\": not found" containerID="c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244"
Feb 13 06:27:47.706207 kubelet[1514]: I0213 06:27:47.706192 1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244"} err="failed to get container status \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8ad6fd5381c88612514a52732086ff0a10e832bf7f3a64eb67c988b1c658244\": not found"
Feb 13 06:27:47.706207 kubelet[1514]: I0213 06:27:47.706197 1514 scope.go:117] "RemoveContainer" containerID="d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf"
Feb 13 06:27:47.706285 env[1164]: time="2024-02-13T06:27:47.706264865Z" level=error msg="ContainerStatus for \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\": not found"
Feb 13 06:27:47.706324 kubelet[1514]: E0213 06:27:47.706319 1514 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\": not found" containerID="d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf"
Feb 13 06:27:47.706344 kubelet[1514]: I0213 06:27:47.706331 1514 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf"} err="failed to get container status \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"d93977289fd4f69b53f6c1a2d3a92d3d8c89b98b537e4cf3ff42bc205d2f6fbf\": not found"
Feb 13 06:27:47.812252 kubelet[1514]: E0213 06:27:47.812141 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:48.634483 kubelet[1514]: I0213 06:27:48.634368 1514 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3272e48c-04c1-4732-b339-06eeda0fbf9d" path="/var/lib/kubelet/pods/3272e48c-04c1-4732-b339-06eeda0fbf9d/volumes"
Feb 13 06:27:48.813217 kubelet[1514]: E0213 06:27:48.813099 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:27:48.975175 kubelet[1514]: I0213 06:27:48.974957 1514 topology_manager.go:215] "Topology Admit Handler" podUID="a847599d-43d5-47ab-91a7-ef65771f5809" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-69gtt"
Feb 13 06:27:48.975175 kubelet[1514]: E0213 06:27:48.975057 1514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3272e48c-04c1-4732-b339-06eeda0fbf9d" containerName="apply-sysctl-overwrites"
Feb 13 06:27:48.975175 kubelet[1514]: E0213 06:27:48.975086 1514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3272e48c-04c1-4732-b339-06eeda0fbf9d" containerName="mount-bpf-fs"
Feb 13 06:27:48.975175 kubelet[1514]: E0213 06:27:48.975109 1514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3272e48c-04c1-4732-b339-06eeda0fbf9d" containerName="clean-cilium-state"
Feb 13 06:27:48.975175 kubelet[1514]: E0213 06:27:48.975128 1514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3272e48c-04c1-4732-b339-06eeda0fbf9d" containerName="cilium-agent"
Feb 13 06:27:48.975175 kubelet[1514]: E0213 06:27:48.975148 1514 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3272e48c-04c1-4732-b339-06eeda0fbf9d" containerName="mount-cgroup"
Feb 13 06:27:48.975175 kubelet[1514]: I0213 06:27:48.975193 1514 memory_manager.go:346] "RemoveStaleState removing state" podUID="3272e48c-04c1-4732-b339-06eeda0fbf9d" containerName="cilium-agent"
Feb 13 06:27:48.976477 kubelet[1514]: I0213 06:27:48.976451 1514 topology_manager.go:215] "Topology Admit Handler" podUID="52d11eeb-a3ed-40fc-bb2a-25763568758e" podNamespace="kube-system" podName="cilium-5lvlk"
Feb 13 06:27:48.992193 systemd[1]: Created slice kubepods-besteffort-poda847599d_43d5_47ab_91a7_ef65771f5809.slice.
Feb 13 06:27:49.000624 systemd[1]: Created slice kubepods-burstable-pod52d11eeb_a3ed_40fc_bb2a_25763568758e.slice.
Feb 13 06:27:49.067727 kubelet[1514]: I0213 06:27:49.067608 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a847599d-43d5-47ab-91a7-ef65771f5809-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-69gtt\" (UID: \"a847599d-43d5-47ab-91a7-ef65771f5809\") " pod="kube-system/cilium-operator-6bc8ccdb58-69gtt"
Feb 13 06:27:49.068055 kubelet[1514]: I0213 06:27:49.067789 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk4xp\" (UniqueName: \"kubernetes.io/projected/a847599d-43d5-47ab-91a7-ef65771f5809-kube-api-access-jk4xp\") pod \"cilium-operator-6bc8ccdb58-69gtt\" (UID: \"a847599d-43d5-47ab-91a7-ef65771f5809\") " pod="kube-system/cilium-operator-6bc8ccdb58-69gtt"
Feb 13 06:27:49.068055 kubelet[1514]: I0213 06:27:49.067904 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cni-path\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.068055 kubelet[1514]: I0213 06:27:49.068021 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-lib-modules\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.068436 kubelet[1514]: I0213 06:27:49.068121 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q87gb\" (UniqueName: \"kubernetes.io/projected/52d11eeb-a3ed-40fc-bb2a-25763568758e-kube-api-access-q87gb\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.068436 kubelet[1514]: I0213 06:27:49.068205 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52d11eeb-a3ed-40fc-bb2a-25763568758e-clustermesh-secrets\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.068436 kubelet[1514]: I0213 06:27:49.068310 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-config-path\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.068775 kubelet[1514]: I0213 06:27:49.068448 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52d11eeb-a3ed-40fc-bb2a-25763568758e-hubble-tls\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.068775 kubelet[1514]: I0213 06:27:49.068549 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-run\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.068775 kubelet[1514]: I0213 06:27:49.068616 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-bpf-maps\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.068775 kubelet[1514]: I0213 06:27:49.068702 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-hostproc\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.068775 kubelet[1514]: I0213 06:27:49.068763 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-etc-cni-netd\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.069270 kubelet[1514]: I0213 06:27:49.068885 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-xtables-lock\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.069270 kubelet[1514]: I0213 06:27:49.069040 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-host-proc-sys-kernel\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.069270 kubelet[1514]: I0213 06:27:49.069193 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-cgroup\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.069609 kubelet[1514]: I0213 06:27:49.069282 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-ipsec-secrets\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.069609 kubelet[1514]: I0213 06:27:49.069400 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-host-proc-sys-net\") pod \"cilium-5lvlk\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " pod="kube-system/cilium-5lvlk"
Feb 13 06:27:49.118194 kubelet[1514]: E0213 06:27:49.118093 1514 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-q87gb lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-5lvlk" podUID="52d11eeb-a3ed-40fc-bb2a-25763568758e"
Feb 13 06:27:49.298678 env[1164]: time="2024-02-13T06:27:49.298539879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-69gtt,Uid:a847599d-43d5-47ab-91a7-ef65771f5809,Namespace:kube-system,Attempt:0,}"
Feb 13 06:27:49.314120 env[1164]: time="2024-02-13T06:27:49.314060624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 06:27:49.314120 env[1164]: time="2024-02-13T06:27:49.314080558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 06:27:49.314120 env[1164]: time="2024-02-13T06:27:49.314087119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 06:27:49.314254 env[1164]: time="2024-02-13T06:27:49.314185940Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88e328429d816be7559f0d82c85c21f2578ce845a0111143e8ec2251070a45f3 pid=3027 runtime=io.containerd.runc.v2
Feb 13 06:27:49.320236 systemd[1]: Started cri-containerd-88e328429d816be7559f0d82c85c21f2578ce845a0111143e8ec2251070a45f3.scope.
Feb 13 06:27:49.344102 env[1164]: time="2024-02-13T06:27:49.344073411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-69gtt,Uid:a847599d-43d5-47ab-91a7-ef65771f5809,Namespace:kube-system,Attempt:0,} returns sandbox id \"88e328429d816be7559f0d82c85c21f2578ce845a0111143e8ec2251070a45f3\""
Feb 13 06:27:49.774740 kubelet[1514]: I0213 06:27:49.774626 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cni-path\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") "
Feb 13 06:27:49.774740 kubelet[1514]: I0213 06:27:49.774741 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q87gb\" (UniqueName: \"kubernetes.io/projected/52d11eeb-a3ed-40fc-bb2a-25763568758e-kube-api-access-q87gb\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") "
Feb 13 06:27:49.775200 kubelet[1514]: I0213 06:27:49.774753 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cni-path" (OuterVolumeSpecName: "cni-path") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:49.775200 kubelet[1514]: I0213 06:27:49.774801 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-run\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") "
Feb 13 06:27:49.775200 kubelet[1514]: I0213 06:27:49.774855 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-bpf-maps\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") "
Feb 13 06:27:49.775200 kubelet[1514]: I0213 06:27:49.774924 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-ipsec-secrets\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") "
Feb 13 06:27:49.775200 kubelet[1514]: I0213 06:27:49.774902 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 06:27:49.776139 kubelet[1514]: I0213 06:27:49.774943 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.776139 kubelet[1514]: I0213 06:27:49.774987 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52d11eeb-a3ed-40fc-bb2a-25763568758e-clustermesh-secrets\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.776139 kubelet[1514]: I0213 06:27:49.775050 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52d11eeb-a3ed-40fc-bb2a-25763568758e-hubble-tls\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.776139 kubelet[1514]: I0213 06:27:49.775106 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-hostproc\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.776139 kubelet[1514]: I0213 06:27:49.775159 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-etc-cni-netd\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.776139 kubelet[1514]: I0213 06:27:49.775210 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-xtables-lock\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.777048 kubelet[1514]: I0213 06:27:49.775248 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-hostproc" 
(OuterVolumeSpecName: "hostproc") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.777048 kubelet[1514]: I0213 06:27:49.775310 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-lib-modules\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.777048 kubelet[1514]: I0213 06:27:49.775309 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.777048 kubelet[1514]: I0213 06:27:49.775365 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.777048 kubelet[1514]: I0213 06:27:49.775424 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.777628 kubelet[1514]: I0213 06:27:49.775365 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-cgroup\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.777628 kubelet[1514]: I0213 06:27:49.775487 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.777628 kubelet[1514]: I0213 06:27:49.775621 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-config-path\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.777628 kubelet[1514]: I0213 06:27:49.775750 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-host-proc-sys-kernel\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.777628 kubelet[1514]: I0213 06:27:49.775795 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.778144 kubelet[1514]: I0213 06:27:49.775870 1514 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-host-proc-sys-net\") pod \"52d11eeb-a3ed-40fc-bb2a-25763568758e\" (UID: \"52d11eeb-a3ed-40fc-bb2a-25763568758e\") " Feb 13 06:27:49.778144 kubelet[1514]: I0213 06:27:49.775945 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 06:27:49.778144 kubelet[1514]: I0213 06:27:49.775991 1514 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cni-path\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.778144 kubelet[1514]: I0213 06:27:49.776060 1514 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-run\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.778144 kubelet[1514]: I0213 06:27:49.776117 1514 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-bpf-maps\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.778144 kubelet[1514]: I0213 06:27:49.776166 1514 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-hostproc\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.778144 kubelet[1514]: I0213 06:27:49.776218 1514 reconciler_common.go:300] 
"Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-etc-cni-netd\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.778863 kubelet[1514]: I0213 06:27:49.776276 1514 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-xtables-lock\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.778863 kubelet[1514]: I0213 06:27:49.776331 1514 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-lib-modules\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.778863 kubelet[1514]: I0213 06:27:49.776399 1514 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-cgroup\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.778863 kubelet[1514]: I0213 06:27:49.776463 1514 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-host-proc-sys-kernel\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.779722 kubelet[1514]: I0213 06:27:49.779688 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 06:27:49.780085 kubelet[1514]: I0213 06:27:49.780049 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 06:27:49.780085 kubelet[1514]: I0213 06:27:49.780058 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d11eeb-a3ed-40fc-bb2a-25763568758e-kube-api-access-q87gb" (OuterVolumeSpecName: "kube-api-access-q87gb") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "kube-api-access-q87gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 06:27:49.780152 kubelet[1514]: I0213 06:27:49.780100 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d11eeb-a3ed-40fc-bb2a-25763568758e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 06:27:49.780219 kubelet[1514]: I0213 06:27:49.780183 1514 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d11eeb-a3ed-40fc-bb2a-25763568758e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "52d11eeb-a3ed-40fc-bb2a-25763568758e" (UID: "52d11eeb-a3ed-40fc-bb2a-25763568758e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 06:27:49.813518 kubelet[1514]: E0213 06:27:49.813488 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:49.877077 kubelet[1514]: I0213 06:27:49.876973 1514 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52d11eeb-a3ed-40fc-bb2a-25763568758e-host-proc-sys-net\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.877077 kubelet[1514]: I0213 06:27:49.877045 1514 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-ipsec-secrets\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.877077 kubelet[1514]: I0213 06:27:49.877079 1514 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q87gb\" (UniqueName: \"kubernetes.io/projected/52d11eeb-a3ed-40fc-bb2a-25763568758e-kube-api-access-q87gb\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.877662 kubelet[1514]: I0213 06:27:49.877109 1514 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52d11eeb-a3ed-40fc-bb2a-25763568758e-hubble-tls\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.877662 kubelet[1514]: I0213 06:27:49.877141 1514 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52d11eeb-a3ed-40fc-bb2a-25763568758e-clustermesh-secrets\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:49.877662 kubelet[1514]: I0213 06:27:49.877170 1514 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52d11eeb-a3ed-40fc-bb2a-25763568758e-cilium-config-path\") on node \"10.67.80.15\" DevicePath \"\"" Feb 13 06:27:50.180062 systemd[1]: 
var-lib-kubelet-pods-52d11eeb\x2da3ed\x2d40fc\x2dbb2a\x2d25763568758e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq87gb.mount: Deactivated successfully. Feb 13 06:27:50.180338 systemd[1]: var-lib-kubelet-pods-52d11eeb\x2da3ed\x2d40fc\x2dbb2a\x2d25763568758e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 06:27:50.180567 systemd[1]: var-lib-kubelet-pods-52d11eeb\x2da3ed\x2d40fc\x2dbb2a\x2d25763568758e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 06:27:50.180752 systemd[1]: var-lib-kubelet-pods-52d11eeb\x2da3ed\x2d40fc\x2dbb2a\x2d25763568758e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 13 06:27:50.639988 systemd[1]: Removed slice kubepods-burstable-pod52d11eeb_a3ed_40fc_bb2a_25763568758e.slice. Feb 13 06:27:50.744075 kubelet[1514]: I0213 06:27:50.744021 1514 topology_manager.go:215] "Topology Admit Handler" podUID="5e1d46c8-d3b1-4ff0-b2ab-08c71b030532" podNamespace="kube-system" podName="cilium-pcdsw" Feb 13 06:27:50.758189 systemd[1]: Created slice kubepods-burstable-pod5e1d46c8_d3b1_4ff0_b2ab_08c71b030532.slice. 
Feb 13 06:27:50.783652 kubelet[1514]: I0213 06:27:50.783542 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-etc-cni-netd\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.783969 kubelet[1514]: I0213 06:27:50.783741 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-clustermesh-secrets\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.783969 kubelet[1514]: I0213 06:27:50.783879 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-cilium-run\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.783969 kubelet[1514]: I0213 06:27:50.783947 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-hubble-tls\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.784347 kubelet[1514]: I0213 06:27:50.784010 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdjvs\" (UniqueName: \"kubernetes.io/projected/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-kube-api-access-gdjvs\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.784347 kubelet[1514]: I0213 06:27:50.784172 1514 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-bpf-maps\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.784609 kubelet[1514]: I0213 06:27:50.784340 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-cni-path\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.784609 kubelet[1514]: I0213 06:27:50.784468 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-host-proc-sys-net\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.784609 kubelet[1514]: I0213 06:27:50.784544 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-host-proc-sys-kernel\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.784919 kubelet[1514]: I0213 06:27:50.784668 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-cilium-config-path\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.784919 kubelet[1514]: I0213 06:27:50.784762 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-hostproc\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.784919 kubelet[1514]: I0213 06:27:50.784823 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-cilium-cgroup\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.784919 kubelet[1514]: I0213 06:27:50.784882 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-lib-modules\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.785338 kubelet[1514]: I0213 06:27:50.784945 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-xtables-lock\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.785338 kubelet[1514]: I0213 06:27:50.785078 1514 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e1d46c8-d3b1-4ff0-b2ab-08c71b030532-cilium-ipsec-secrets\") pod \"cilium-pcdsw\" (UID: \"5e1d46c8-d3b1-4ff0-b2ab-08c71b030532\") " pod="kube-system/cilium-pcdsw" Feb 13 06:27:50.814038 kubelet[1514]: E0213 06:27:50.813963 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:51.074835 env[1164]: time="2024-02-13T06:27:51.074707517Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-pcdsw,Uid:5e1d46c8-d3b1-4ff0-b2ab-08c71b030532,Namespace:kube-system,Attempt:0,}" Feb 13 06:27:51.084117 env[1164]: time="2024-02-13T06:27:51.084083249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 06:27:51.084117 env[1164]: time="2024-02-13T06:27:51.084103888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 06:27:51.084117 env[1164]: time="2024-02-13T06:27:51.084112006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 06:27:51.084244 env[1164]: time="2024-02-13T06:27:51.084226166Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750 pid=3075 runtime=io.containerd.runc.v2 Feb 13 06:27:51.089923 systemd[1]: Started cri-containerd-9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750.scope. 
Feb 13 06:27:51.100901 env[1164]: time="2024-02-13T06:27:51.100870849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pcdsw,Uid:5e1d46c8-d3b1-4ff0-b2ab-08c71b030532,Namespace:kube-system,Attempt:0,} returns sandbox id \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\"" Feb 13 06:27:51.102118 env[1164]: time="2024-02-13T06:27:51.102102587Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 06:27:51.106992 env[1164]: time="2024-02-13T06:27:51.106933454Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7ca6eeca35af97e929b416fc5c998224b0a87c357a0bb7ea6c4bb77a6df1dd92\"" Feb 13 06:27:51.107184 env[1164]: time="2024-02-13T06:27:51.107171391Z" level=info msg="StartContainer for \"7ca6eeca35af97e929b416fc5c998224b0a87c357a0bb7ea6c4bb77a6df1dd92\"" Feb 13 06:27:51.115096 systemd[1]: Started cri-containerd-7ca6eeca35af97e929b416fc5c998224b0a87c357a0bb7ea6c4bb77a6df1dd92.scope. Feb 13 06:27:51.129280 env[1164]: time="2024-02-13T06:27:51.129250290Z" level=info msg="StartContainer for \"7ca6eeca35af97e929b416fc5c998224b0a87c357a0bb7ea6c4bb77a6df1dd92\" returns successfully" Feb 13 06:27:51.135886 systemd[1]: cri-containerd-7ca6eeca35af97e929b416fc5c998224b0a87c357a0bb7ea6c4bb77a6df1dd92.scope: Deactivated successfully. 
Feb 13 06:27:51.175554 env[1164]: time="2024-02-13T06:27:51.175374461Z" level=info msg="shim disconnected" id=7ca6eeca35af97e929b416fc5c998224b0a87c357a0bb7ea6c4bb77a6df1dd92 Feb 13 06:27:51.175554 env[1164]: time="2024-02-13T06:27:51.175515977Z" level=warning msg="cleaning up after shim disconnected" id=7ca6eeca35af97e929b416fc5c998224b0a87c357a0bb7ea6c4bb77a6df1dd92 namespace=k8s.io Feb 13 06:27:51.175554 env[1164]: time="2024-02-13T06:27:51.175547983Z" level=info msg="cleaning up dead shim" Feb 13 06:27:51.192048 env[1164]: time="2024-02-13T06:27:51.191919829Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3158 runtime=io.containerd.runc.v2\n" Feb 13 06:27:51.713593 kubelet[1514]: E0213 06:27:51.713492 1514 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 06:27:51.720632 env[1164]: time="2024-02-13T06:27:51.720518090Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 06:27:51.734020 env[1164]: time="2024-02-13T06:27:51.733899937Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6924d600108957cc1d45d041f45910ad9c91c530fe509364fecd22782eef9baa\"" Feb 13 06:27:51.734605 env[1164]: time="2024-02-13T06:27:51.734588325Z" level=info msg="StartContainer for \"6924d600108957cc1d45d041f45910ad9c91c530fe509364fecd22782eef9baa\"" Feb 13 06:27:51.743904 systemd[1]: Started cri-containerd-6924d600108957cc1d45d041f45910ad9c91c530fe509364fecd22782eef9baa.scope. 
Feb 13 06:27:51.755459 env[1164]: time="2024-02-13T06:27:51.755426009Z" level=info msg="StartContainer for \"6924d600108957cc1d45d041f45910ad9c91c530fe509364fecd22782eef9baa\" returns successfully" Feb 13 06:27:51.758863 systemd[1]: cri-containerd-6924d600108957cc1d45d041f45910ad9c91c530fe509364fecd22782eef9baa.scope: Deactivated successfully. Feb 13 06:27:51.768856 env[1164]: time="2024-02-13T06:27:51.768814339Z" level=info msg="shim disconnected" id=6924d600108957cc1d45d041f45910ad9c91c530fe509364fecd22782eef9baa Feb 13 06:27:51.768967 env[1164]: time="2024-02-13T06:27:51.768858418Z" level=warning msg="cleaning up after shim disconnected" id=6924d600108957cc1d45d041f45910ad9c91c530fe509364fecd22782eef9baa namespace=k8s.io Feb 13 06:27:51.768967 env[1164]: time="2024-02-13T06:27:51.768869344Z" level=info msg="cleaning up dead shim" Feb 13 06:27:51.773091 env[1164]: time="2024-02-13T06:27:51.773048717Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3219 runtime=io.containerd.runc.v2\n" Feb 13 06:27:51.814824 kubelet[1514]: E0213 06:27:51.814742 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:51.929868 update_engine[1156]: I0213 06:27:51.929750 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 06:27:51.930703 update_engine[1156]: I0213 06:27:51.930224 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 06:27:51.930703 update_engine[1156]: E0213 06:27:51.930470 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 06:27:51.930703 update_engine[1156]: I0213 06:27:51.930635 1156 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 06:27:52.180651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6924d600108957cc1d45d041f45910ad9c91c530fe509364fecd22782eef9baa-rootfs.mount: Deactivated 
successfully. Feb 13 06:27:52.635859 kubelet[1514]: I0213 06:27:52.635754 1514 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="52d11eeb-a3ed-40fc-bb2a-25763568758e" path="/var/lib/kubelet/pods/52d11eeb-a3ed-40fc-bb2a-25763568758e/volumes" Feb 13 06:27:52.728219 env[1164]: time="2024-02-13T06:27:52.728086759Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 06:27:52.746877 env[1164]: time="2024-02-13T06:27:52.746831874Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5f228e2535d5489cc790d8cb5dcb1f11ec38874bd662f0ff204a2f758f9543b\"" Feb 13 06:27:52.747240 env[1164]: time="2024-02-13T06:27:52.747165403Z" level=info msg="StartContainer for \"e5f228e2535d5489cc790d8cb5dcb1f11ec38874bd662f0ff204a2f758f9543b\"" Feb 13 06:27:52.756638 systemd[1]: Started cri-containerd-e5f228e2535d5489cc790d8cb5dcb1f11ec38874bd662f0ff204a2f758f9543b.scope. Feb 13 06:27:52.770739 env[1164]: time="2024-02-13T06:27:52.770715868Z" level=info msg="StartContainer for \"e5f228e2535d5489cc790d8cb5dcb1f11ec38874bd662f0ff204a2f758f9543b\" returns successfully" Feb 13 06:27:52.771976 systemd[1]: cri-containerd-e5f228e2535d5489cc790d8cb5dcb1f11ec38874bd662f0ff204a2f758f9543b.scope: Deactivated successfully. 
Feb 13 06:27:52.800855 env[1164]: time="2024-02-13T06:27:52.800795841Z" level=info msg="shim disconnected" id=e5f228e2535d5489cc790d8cb5dcb1f11ec38874bd662f0ff204a2f758f9543b Feb 13 06:27:52.800855 env[1164]: time="2024-02-13T06:27:52.800825087Z" level=warning msg="cleaning up after shim disconnected" id=e5f228e2535d5489cc790d8cb5dcb1f11ec38874bd662f0ff204a2f758f9543b namespace=k8s.io Feb 13 06:27:52.800855 env[1164]: time="2024-02-13T06:27:52.800832002Z" level=info msg="cleaning up dead shim" Feb 13 06:27:52.805538 env[1164]: time="2024-02-13T06:27:52.805489142Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3274 runtime=io.containerd.runc.v2\n" Feb 13 06:27:52.815289 kubelet[1514]: E0213 06:27:52.815244 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:53.180403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5f228e2535d5489cc790d8cb5dcb1f11ec38874bd662f0ff204a2f758f9543b-rootfs.mount: Deactivated successfully. 
Feb 13 06:27:53.736971 env[1164]: time="2024-02-13T06:27:53.736843142Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 06:27:53.750871 env[1164]: time="2024-02-13T06:27:53.750832544Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"358d3c9adb275d0ae6a4ac7bcc94b4eb305c09c68fa9b1f9862a4a4f02904f58\"" Feb 13 06:27:53.751137 env[1164]: time="2024-02-13T06:27:53.751122875Z" level=info msg="StartContainer for \"358d3c9adb275d0ae6a4ac7bcc94b4eb305c09c68fa9b1f9862a4a4f02904f58\"" Feb 13 06:27:53.759673 systemd[1]: Started cri-containerd-358d3c9adb275d0ae6a4ac7bcc94b4eb305c09c68fa9b1f9862a4a4f02904f58.scope. Feb 13 06:27:53.771322 env[1164]: time="2024-02-13T06:27:53.771269265Z" level=info msg="StartContainer for \"358d3c9adb275d0ae6a4ac7bcc94b4eb305c09c68fa9b1f9862a4a4f02904f58\" returns successfully" Feb 13 06:27:53.771656 systemd[1]: cri-containerd-358d3c9adb275d0ae6a4ac7bcc94b4eb305c09c68fa9b1f9862a4a4f02904f58.scope: Deactivated successfully. 
Feb 13 06:27:53.780765 env[1164]: time="2024-02-13T06:27:53.780711162Z" level=info msg="shim disconnected" id=358d3c9adb275d0ae6a4ac7bcc94b4eb305c09c68fa9b1f9862a4a4f02904f58 Feb 13 06:27:53.780765 env[1164]: time="2024-02-13T06:27:53.780736830Z" level=warning msg="cleaning up after shim disconnected" id=358d3c9adb275d0ae6a4ac7bcc94b4eb305c09c68fa9b1f9862a4a4f02904f58 namespace=k8s.io Feb 13 06:27:53.780765 env[1164]: time="2024-02-13T06:27:53.780743340Z" level=info msg="cleaning up dead shim" Feb 13 06:27:53.784448 env[1164]: time="2024-02-13T06:27:53.784388274Z" level=warning msg="cleanup warnings time=\"2024-02-13T06:27:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3329 runtime=io.containerd.runc.v2\n" Feb 13 06:27:53.816300 kubelet[1514]: E0213 06:27:53.816276 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:54.180813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-358d3c9adb275d0ae6a4ac7bcc94b4eb305c09c68fa9b1f9862a4a4f02904f58-rootfs.mount: Deactivated successfully. Feb 13 06:27:54.746874 env[1164]: time="2024-02-13T06:27:54.746746194Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 06:27:54.766326 env[1164]: time="2024-02-13T06:27:54.766281813Z" level=info msg="CreateContainer within sandbox \"9772741f352d610372029bfd03b5c2daf23359423fb11159590650c5a9240750\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9a065fb8879bc878467b8d7deb01aa81364c2ecf017f44d33134899c9346f1a0\"" Feb 13 06:27:54.766551 env[1164]: time="2024-02-13T06:27:54.766535321Z" level=info msg="StartContainer for \"9a065fb8879bc878467b8d7deb01aa81364c2ecf017f44d33134899c9346f1a0\"" Feb 13 06:27:54.774500 systemd[1]: Started cri-containerd-9a065fb8879bc878467b8d7deb01aa81364c2ecf017f44d33134899c9346f1a0.scope. 
Feb 13 06:27:54.787299 env[1164]: time="2024-02-13T06:27:54.787246500Z" level=info msg="StartContainer for \"9a065fb8879bc878467b8d7deb01aa81364c2ecf017f44d33134899c9346f1a0\" returns successfully" Feb 13 06:27:54.817065 kubelet[1514]: E0213 06:27:54.817049 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:54.930430 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 06:27:55.056520 kubelet[1514]: I0213 06:27:55.056331 1514 setters.go:552] "Node became not ready" node="10.67.80.15" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-13T06:27:55Z","lastTransitionTime":"2024-02-13T06:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 06:27:55.785371 kubelet[1514]: I0213 06:27:55.785274 1514 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pcdsw" podStartSLOduration=5.785186928 podCreationTimestamp="2024-02-13 06:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 06:27:55.784981602 +0000 UTC m=+389.734810055" watchObservedRunningTime="2024-02-13 06:27:55.785186928 +0000 UTC m=+389.735015383" Feb 13 06:27:55.817806 kubelet[1514]: E0213 06:27:55.817699 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:56.817959 kubelet[1514]: E0213 06:27:56.817892 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:57.786319 systemd-networkd[1007]: lxc_health: Link UP Feb 13 06:27:57.809091 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 13 06:27:57.808763 systemd-networkd[1007]: lxc_health: Gained carrier Feb 13 06:27:57.818309 kubelet[1514]: E0213 06:27:57.818256 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:58.819060 kubelet[1514]: E0213 06:27:58.819012 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:27:59.754547 systemd-networkd[1007]: lxc_health: Gained IPv6LL Feb 13 06:27:59.819454 kubelet[1514]: E0213 06:27:59.819435 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:00.820235 kubelet[1514]: E0213 06:28:00.820160 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:01.820641 kubelet[1514]: E0213 06:28:01.820569 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:01.936514 update_engine[1156]: I0213 06:28:01.936436 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 06:28:01.937302 update_engine[1156]: I0213 06:28:01.936939 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 06:28:01.937302 update_engine[1156]: E0213 06:28:01.937147 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 06:28:01.937302 update_engine[1156]: I0213 06:28:01.937294 1156 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 06:28:01.937647 update_engine[1156]: I0213 06:28:01.937309 1156 omaha_request_action.cc:621] Omaha request response: Feb 13 06:28:01.937647 update_engine[1156]: E0213 06:28:01.937485 1156 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 13 06:28:01.937647 update_engine[1156]: I0213 06:28:01.937515 1156 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 06:28:01.937647 update_engine[1156]: I0213 06:28:01.937525 1156 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 06:28:01.937647 update_engine[1156]: I0213 06:28:01.937534 1156 update_attempter.cc:306] Processing Done. Feb 13 06:28:01.937647 update_engine[1156]: E0213 06:28:01.937561 1156 update_attempter.cc:619] Update failed. Feb 13 06:28:01.937647 update_engine[1156]: I0213 06:28:01.937569 1156 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 06:28:01.937647 update_engine[1156]: I0213 06:28:01.937578 1156 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 06:28:01.937647 update_engine[1156]: I0213 06:28:01.937588 1156 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.937743 1156 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.937795 1156 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.937805 1156 omaha_request_action.cc:271] Request: Feb 13 06:28:01.938514 update_engine[1156]: Feb 13 06:28:01.938514 update_engine[1156]: Feb 13 06:28:01.938514 update_engine[1156]: Feb 13 06:28:01.938514 update_engine[1156]: Feb 13 06:28:01.938514 update_engine[1156]: Feb 13 06:28:01.938514 update_engine[1156]: Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.937815 1156 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.938138 1156 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 06:28:01.938514 update_engine[1156]: E0213 06:28:01.938299 1156 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.938448 1156 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.938463 1156 omaha_request_action.cc:621] Omaha request response: Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.938473 1156 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.938482 1156 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.938490 1156 update_attempter.cc:306] Processing Done. Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.938498 1156 update_attempter.cc:310] Error event sent. 
Feb 13 06:28:01.938514 update_engine[1156]: I0213 06:28:01.938519 1156 update_check_scheduler.cc:74] Next update check in 47m0s Feb 13 06:28:01.940179 locksmithd[1201]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 06:28:01.940179 locksmithd[1201]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 06:28:02.821743 kubelet[1514]: E0213 06:28:02.821683 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:03.821995 kubelet[1514]: E0213 06:28:03.821847 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:04.467335 systemd[1]: Started sshd@12-147.75.49.59:22-218.92.0.22:30221.service. Feb 13 06:28:04.822703 kubelet[1514]: E0213 06:28:04.822628 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:05.485985 sshd[4178]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 13 06:28:05.823113 kubelet[1514]: E0213 06:28:05.823001 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:06.538294 kubelet[1514]: E0213 06:28:06.538185 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:06.823608 kubelet[1514]: E0213 06:28:06.823417 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:07.413568 sshd[4178]: Failed password for root from 218.92.0.22 port 30221 ssh2 Feb 13 06:28:07.824417 kubelet[1514]: E0213 06:28:07.824323 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:28:08.824911 kubelet[1514]: E0213 06:28:08.824800 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:09.825295 kubelet[1514]: E0213 06:28:09.825214 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:10.795614 sshd[4178]: Failed password for root from 218.92.0.22 port 30221 ssh2 Feb 13 06:28:10.826087 kubelet[1514]: E0213 06:28:10.826016 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:11.826809 kubelet[1514]: E0213 06:28:11.826684 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:12.827515 kubelet[1514]: E0213 06:28:12.827403 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:13.400348 sshd[4178]: Failed password for root from 218.92.0.22 port 30221 ssh2 Feb 13 06:28:13.827926 kubelet[1514]: E0213 06:28:13.827858 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:14.828421 kubelet[1514]: E0213 06:28:14.828336 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:15.154470 sshd[4178]: Received disconnect from 218.92.0.22 port 30221:11: [preauth] Feb 13 06:28:15.154470 sshd[4178]: Disconnected from authenticating user root 218.92.0.22 port 30221 [preauth] Feb 13 06:28:15.154903 sshd[4178]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 13 06:28:15.157033 systemd[1]: sshd@12-147.75.49.59:22-218.92.0.22:30221.service: Deactivated successfully.
Feb 13 06:28:15.309049 systemd[1]: Started sshd@13-147.75.49.59:22-218.92.0.22:50950.service. Feb 13 06:28:15.829521 kubelet[1514]: E0213 06:28:15.829407 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:16.307765 sshd[4326]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 13 06:28:16.308007 sshd[4326]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 13 06:28:16.829930 kubelet[1514]: E0213 06:28:16.829792 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:17.830729 kubelet[1514]: E0213 06:28:17.830605 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:18.079276 sshd[4326]: Failed password for root from 218.92.0.22 port 50950 ssh2 Feb 13 06:28:18.830993 kubelet[1514]: E0213 06:28:18.830912 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:19.831221 kubelet[1514]: E0213 06:28:19.831143 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:20.676665 sshd[4326]: Failed password for root from 218.92.0.22 port 50950 ssh2 Feb 13 06:28:20.832157 kubelet[1514]: E0213 06:28:20.832047 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:21.832957 kubelet[1514]: E0213 06:28:21.832830 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:22.834140 kubelet[1514]: E0213 06:28:22.834061 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:28:23.834456 kubelet[1514]: E0213 06:28:23.834325 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:24.056286 sshd[4326]: Failed password for root from 218.92.0.22 port 50950 ssh2 Feb 13 06:28:24.835361 kubelet[1514]: E0213 06:28:24.835278 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:25.836268 kubelet[1514]: E0213 06:28:25.836207 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:26.335901 sshd[4326]: Received disconnect from 218.92.0.22 port 50950:11: [preauth] Feb 13 06:28:26.335901 sshd[4326]: Disconnected from authenticating user root 218.92.0.22 port 50950 [preauth] Feb 13 06:28:26.336478 sshd[4326]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 13 06:28:26.338591 systemd[1]: sshd@13-147.75.49.59:22-218.92.0.22:50950.service: Deactivated successfully. Feb 13 06:28:26.488199 systemd[1]: Started sshd@14-147.75.49.59:22-218.92.0.22:10114.service.
Feb 13 06:28:26.538425 kubelet[1514]: E0213 06:28:26.538294 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:26.580143 env[1164]: time="2024-02-13T06:28:26.580039567Z" level=info msg="StopPodSandbox for \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\"" Feb 13 06:28:26.581139 env[1164]: time="2024-02-13T06:28:26.580248146Z" level=info msg="TearDown network for sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" successfully" Feb 13 06:28:26.581139 env[1164]: time="2024-02-13T06:28:26.580342690Z" level=info msg="StopPodSandbox for \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" returns successfully" Feb 13 06:28:26.581435 env[1164]: time="2024-02-13T06:28:26.581356209Z" level=info msg="RemovePodSandbox for \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\"" Feb 13 06:28:26.581553 env[1164]: time="2024-02-13T06:28:26.581441901Z" level=info msg="Forcibly stopping sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\"" Feb 13 06:28:26.581672 env[1164]: time="2024-02-13T06:28:26.581633891Z" level=info msg="TearDown network for sandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" successfully" Feb 13 06:28:26.585717 env[1164]: time="2024-02-13T06:28:26.585652215Z" level=info msg="RemovePodSandbox \"04d569a14c169beb5886a3e3aa76b18bb167223f4801a6479781947f283e0515\" returns successfully" Feb 13 06:28:26.837412 kubelet[1514]: E0213 06:28:26.837315 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:27.483689 sshd[4483]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 13 06:28:27.837529 kubelet[1514]: E0213 06:28:27.837476 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:28:28.837932 kubelet[1514]: E0213 06:28:28.837808 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:29.431252 sshd[4483]: Failed password for root from 218.92.0.22 port 10114 ssh2 Feb 13 06:28:29.838300 kubelet[1514]: E0213 06:28:29.838229 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:30.839094 kubelet[1514]: E0213 06:28:30.839019 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:31.839535 kubelet[1514]: E0213 06:28:31.839455 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:32.809609 sshd[4483]: Failed password for root from 218.92.0.22 port 10114 ssh2 Feb 13 06:28:32.840362 kubelet[1514]: E0213 06:28:32.840285 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:33.841333 kubelet[1514]: E0213 06:28:33.841215 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:34.841553 kubelet[1514]: E0213 06:28:34.841473 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:35.074674 sshd[4483]: Failed password for root from 218.92.0.22 port 10114 ssh2 Feb 13 06:28:35.308843 sshd[4483]: Received disconnect from 218.92.0.22 port 10114:11: [preauth] Feb 13 06:28:35.308843 sshd[4483]: Disconnected from authenticating user root 218.92.0.22 port 10114 [preauth] Feb 13 06:28:35.309367 sshd[4483]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.22 user=root Feb 13 06:28:35.311618 systemd[1]: sshd@14-147.75.49.59:22-218.92.0.22:10114.service: Deactivated successfully.
Feb 13 06:28:35.842319 kubelet[1514]: E0213 06:28:35.842200 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:36.843045 kubelet[1514]: E0213 06:28:36.843018 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:37.843831 kubelet[1514]: E0213 06:28:37.843667 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:38.844271 kubelet[1514]: E0213 06:28:38.844197 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:39.845370 kubelet[1514]: E0213 06:28:39.845262 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:40.845910 kubelet[1514]: E0213 06:28:40.845833 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:41.846574 kubelet[1514]: E0213 06:28:41.846457 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:42.846727 kubelet[1514]: E0213 06:28:42.846619 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:43.846886 kubelet[1514]: E0213 06:28:43.846767 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:44.847066 kubelet[1514]: E0213 06:28:44.846950 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:45.847683 kubelet[1514]: E0213 06:28:45.847559 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 06:28:46.538687 kubelet[1514]: E0213 06:28:46.538570 1514 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:46.848662 kubelet[1514]: E0213 06:28:46.848430 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:47.849339 kubelet[1514]: E0213 06:28:47.849217 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:48.850073 kubelet[1514]: E0213 06:28:48.849997 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:49.851290 kubelet[1514]: E0213 06:28:49.851210 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:50.852305 kubelet[1514]: E0213 06:28:50.852232 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:51.853121 kubelet[1514]: E0213 06:28:51.853045 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 06:28:52.853524 kubelet[1514]: E0213 06:28:52.853442 1514 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"