Dec 13 13:29:48.935510 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:29:48.935534 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:29:48.935547 kernel: BIOS-provided physical RAM map:
Dec 13 13:29:48.935554 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 13:29:48.935577 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 13:29:48.935584 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 13:29:48.935591 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 13:29:48.935598 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 13:29:48.935604 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 13 13:29:48.935611 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 13 13:29:48.935617 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Dec 13 13:29:48.935627 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 13 13:29:48.935634 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 13 13:29:48.935640 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 13 13:29:48.935648 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 13 13:29:48.935655 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 13:29:48.935665 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Dec 13 13:29:48.935672 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Dec 13 13:29:48.935679 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Dec 13 13:29:48.935685 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Dec 13 13:29:48.935692 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 13 13:29:48.935699 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 13:29:48.935706 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 13:29:48.935713 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:29:48.935720 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 13 13:29:48.935727 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:29:48.935734 kernel: NX (Execute Disable) protection: active
Dec 13 13:29:48.935743 kernel: APIC: Static calls initialized
Dec 13 13:29:48.935750 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Dec 13 13:29:48.935758 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Dec 13 13:29:48.935765 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Dec 13 13:29:48.935772 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Dec 13 13:29:48.935778 kernel: extended physical RAM map:
Dec 13 13:29:48.935785 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 13:29:48.935793 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 13:29:48.935800 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 13:29:48.935807 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 13:29:48.935814 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 13:29:48.935821 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 13 13:29:48.935831 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 13 13:29:48.935841 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Dec 13 13:29:48.935849 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Dec 13 13:29:48.935856 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Dec 13 13:29:48.935863 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Dec 13 13:29:48.935870 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Dec 13 13:29:48.935880 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 13 13:29:48.935887 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 13 13:29:48.935894 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 13 13:29:48.935901 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 13 13:29:48.935908 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 13:29:48.935916 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Dec 13 13:29:48.935923 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Dec 13 13:29:48.935930 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Dec 13 13:29:48.935938 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Dec 13 13:29:48.935947 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 13 13:29:48.935955 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 13:29:48.935962 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 13:29:48.935969 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:29:48.935976 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 13 13:29:48.935984 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:29:48.935991 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:29:48.935998 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Dec 13 13:29:48.936005 kernel: random: crng init done
Dec 13 13:29:48.936013 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Dec 13 13:29:48.936020 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Dec 13 13:29:48.936027 kernel: secureboot: Secure boot disabled
Dec 13 13:29:48.936036 kernel: SMBIOS 2.8 present.
Dec 13 13:29:48.936044 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 13 13:29:48.936051 kernel: Hypervisor detected: KVM
Dec 13 13:29:48.936058 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:29:48.936066 kernel: kvm-clock: using sched offset of 2630610217 cycles
Dec 13 13:29:48.936073 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:29:48.936081 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 13:29:48.936089 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:29:48.936096 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:29:48.936104 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 13 13:29:48.936115 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 13:29:48.936122 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:29:48.936129 kernel: Using GB pages for direct mapping
Dec 13 13:29:48.936137 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:29:48.936144 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 13:29:48.936152 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:29:48.936159 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:29:48.936167 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:29:48.936174 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 13:29:48.936184 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:29:48.936191 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:29:48.936199 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:29:48.936206 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:29:48.936213 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 13:29:48.936221 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 13:29:48.936228 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 13:29:48.936242 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 13:29:48.936250 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 13:29:48.936259 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 13:29:48.936267 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 13:29:48.936274 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 13:29:48.936281 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 13:29:48.936289 kernel: No NUMA configuration found
Dec 13 13:29:48.936296 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Dec 13 13:29:48.936303 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Dec 13 13:29:48.936311 kernel: Zone ranges:
Dec 13 13:29:48.936318 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:29:48.936328 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Dec 13 13:29:48.936335 kernel: Normal empty
Dec 13 13:29:48.936343 kernel: Movable zone start for each node
Dec 13 13:29:48.936350 kernel: Early memory node ranges
Dec 13 13:29:48.936357 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 13:29:48.936364 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 13:29:48.936383 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 13:29:48.936392 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Dec 13 13:29:48.936399 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Dec 13 13:29:48.936410 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Dec 13 13:29:48.936425 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Dec 13 13:29:48.936440 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Dec 13 13:29:48.936462 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Dec 13 13:29:48.936470 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:29:48.936478 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 13:29:48.936509 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 13:29:48.936519 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:29:48.936527 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Dec 13 13:29:48.936536 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Dec 13 13:29:48.936545 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 13:29:48.936554 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 13 13:29:48.936689 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Dec 13 13:29:48.936700 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:29:48.936708 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:29:48.936716 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:29:48.936724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:29:48.936734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:29:48.936742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:29:48.936750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:29:48.936757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:29:48.936765 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:29:48.936773 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 13:29:48.936780 kernel: TSC deadline timer available
Dec 13 13:29:48.936788 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 13:29:48.936796 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:29:48.936803 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 13:29:48.936813 kernel: kvm-guest: setup PV sched yield
Dec 13 13:29:48.936821 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 13 13:29:48.936828 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:29:48.936836 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:29:48.936844 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 13:29:48.936852 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 13:29:48.936860 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 13:29:48.936867 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 13:29:48.936875 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:29:48.936885 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:29:48.936894 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:29:48.936902 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:29:48.936910 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:29:48.936918 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:29:48.936926 kernel: Fallback order for Node 0: 0
Dec 13 13:29:48.936933 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Dec 13 13:29:48.936941 kernel: Policy zone: DMA32
Dec 13 13:29:48.936951 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:29:48.936959 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 177824K reserved, 0K cma-reserved)
Dec 13 13:29:48.936967 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:29:48.936975 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:29:48.936982 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:29:48.936990 kernel: Dynamic Preempt: voluntary
Dec 13 13:29:48.936998 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:29:48.937006 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:29:48.937014 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:29:48.937024 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:29:48.937032 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:29:48.937040 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:29:48.937048 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:29:48.937055 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:29:48.937063 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 13:29:48.937071 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:29:48.937078 kernel: Console: colour dummy device 80x25
Dec 13 13:29:48.937086 kernel: printk: console [ttyS0] enabled
Dec 13 13:29:48.937096 kernel: ACPI: Core revision 20230628
Dec 13 13:29:48.937104 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 13:29:48.937112 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:29:48.937119 kernel: x2apic enabled
Dec 13 13:29:48.937127 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:29:48.937135 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 13:29:48.937143 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 13:29:48.937150 kernel: kvm-guest: setup PV IPIs
Dec 13 13:29:48.937158 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 13:29:48.937168 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 13:29:48.937176 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 13:29:48.937183 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:29:48.937191 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 13:29:48.937199 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 13:29:48.937206 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:29:48.937214 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:29:48.937222 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:29:48.937230 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:29:48.937248 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 13:29:48.937257 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 13:29:48.937265 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:29:48.937273 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:29:48.937280 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 13:29:48.937289 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 13:29:48.937297 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 13:29:48.937304 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:29:48.937315 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:29:48.937322 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:29:48.937330 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:29:48.937338 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 13:29:48.937345 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:29:48.937353 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:29:48.937361 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:29:48.937368 kernel: landlock: Up and running.
Dec 13 13:29:48.937376 kernel: SELinux: Initializing.
Dec 13 13:29:48.937386 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:29:48.937394 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:29:48.937402 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 13:29:48.937410 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:29:48.937418 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:29:48.937426 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:29:48.937433 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 13:29:48.937441 kernel: ... version: 0
Dec 13 13:29:48.937449 kernel: ... bit width: 48
Dec 13 13:29:48.937459 kernel: ... generic registers: 6
Dec 13 13:29:48.937466 kernel: ... value mask: 0000ffffffffffff
Dec 13 13:29:48.937474 kernel: ... max period: 00007fffffffffff
Dec 13 13:29:48.937482 kernel: ... fixed-purpose events: 0
Dec 13 13:29:48.937489 kernel: ... event mask: 000000000000003f
Dec 13 13:29:48.937497 kernel: signal: max sigframe size: 1776
Dec 13 13:29:48.937504 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:29:48.937512 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:29:48.937520 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:29:48.937530 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:29:48.937538 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 13:29:48.937545 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:29:48.937553 kernel: smpboot: Max logical packages: 1
Dec 13 13:29:48.937571 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 13:29:48.937579 kernel: devtmpfs: initialized
Dec 13 13:29:48.937587 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:29:48.937594 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 13:29:48.937602 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 13:29:48.937610 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Dec 13 13:29:48.937620 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 13:29:48.937628 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Dec 13 13:29:48.937636 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 13:29:48.937644 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:29:48.937651 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:29:48.937659 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:29:48.937667 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:29:48.937674 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:29:48.937685 kernel: audit: type=2000 audit(1734096588.848:1): state=initialized audit_enabled=0 res=1
Dec 13 13:29:48.937692 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:29:48.937700 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:29:48.937708 kernel: cpuidle: using governor menu
Dec 13 13:29:48.937715 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:29:48.937723 kernel: dca service started, version 1.12.1
Dec 13 13:29:48.937731 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 13:29:48.937739 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:29:48.937746 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:29:48.937757 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:29:48.937764 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:29:48.937772 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:29:48.937780 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:29:48.937787 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:29:48.937795 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:29:48.937803 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:29:48.937810 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:29:48.937818 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:29:48.937828 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:29:48.937835 kernel: ACPI: Interpreter enabled
Dec 13 13:29:48.937843 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 13:29:48.937858 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:29:48.937873 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:29:48.937891 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:29:48.937906 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:29:48.937924 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:29:48.938199 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:29:48.938391 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 13:29:48.938521 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 13:29:48.938532 kernel: PCI host bridge to bus 0000:00
Dec 13 13:29:48.938679 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:29:48.938793 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:29:48.938906 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:29:48.939026 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 13 13:29:48.939136 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 13 13:29:48.939254 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 13 13:29:48.939367 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:29:48.939510 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:29:48.939745 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 13:29:48.939895 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 13:29:48.940022 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 13:29:48.940142 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 13:29:48.940274 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 13:29:48.940397 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:29:48.940536 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:29:48.940673 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 13:29:48.940814 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 13:29:48.940936 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 13 13:29:48.941066 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:29:48.941190 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 13:29:48.941456 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 13:29:48.941644 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 13 13:29:48.941795 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:29:48.941934 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 13:29:48.942063 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 13:29:48.942191 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 13 13:29:48.942328 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 13:29:48.942465 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:29:48.943363 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:29:48.943506 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:29:48.943655 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 13:29:48.943777 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 13:29:48.943908 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:29:48.944031 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 13:29:48.944042 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:29:48.944050 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:29:48.944059 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:29:48.944071 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:29:48.944079 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:29:48.944088 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:29:48.944096 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:29:48.944103 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:29:48.944111 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:29:48.944119 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:29:48.944127 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:29:48.944135 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:29:48.944145 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:29:48.944154 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:29:48.944162 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:29:48.944170 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:29:48.944178 kernel: iommu: Default domain type: Translated
Dec 13 13:29:48.944186 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:29:48.944194 kernel: efivars: Registered efivars operations
Dec 13 13:29:48.944202 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:29:48.944210 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:29:48.944218 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 13:29:48.944228 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Dec 13 13:29:48.944244 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Dec 13 13:29:48.944252 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Dec 13 13:29:48.944260 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Dec 13 13:29:48.944268 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Dec 13 13:29:48.944276 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Dec 13 13:29:48.944284 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Dec 13 13:29:48.944414 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:29:48.944548 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:29:48.944765 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:29:48.944778 kernel: vgaarb: loaded
Dec 13 13:29:48.944786 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 13:29:48.944795 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 13:29:48.944803 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:29:48.944811 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:29:48.944820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:29:48.944828 kernel: pnp: PnP ACPI init
Dec 13 13:29:48.944964 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 13 13:29:48.944976 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 13:29:48.944984 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:29:48.944993 kernel: NET: Registered PF_INET protocol family
Dec 13 13:29:48.945021 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:29:48.945032 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:29:48.945040 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:29:48.945048 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:29:48.945059 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:29:48.945067 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:29:48.945075 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:29:48.945084 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:29:48.945092 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:29:48.945100 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:29:48.945226 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 13:29:48.946474 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 13:29:48.946617 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:29:48.946738 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:29:48.946852 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:29:48.946963 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 13 13:29:48.947073 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 13 13:29:48.947186 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 13 13:29:48.947197 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:29:48.947205 kernel: Initialise system trusted keyrings
Dec 13 13:29:48.947218 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:29:48.947226 kernel: Key type asymmetric registered
Dec 13 13:29:48.947257 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:29:48.947265 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:29:48.947273 kernel: io scheduler mq-deadline registered
Dec 13 13:29:48.947282 kernel: io scheduler kyber registered
Dec 13 13:29:48.947290 kernel: io scheduler bfq registered
Dec 13 13:29:48.947298 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:29:48.947307 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:29:48.947318 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 13:29:48.947329 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 13:29:48.947337 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:29:48.947345 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:29:48.947354 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 13:29:48.947362 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:29:48.947373 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:29:48.947502 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 13:29:48.947515 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 13:29:48.947759 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 13:29:48.947874 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:29:48 UTC (1734096588)
Dec 13 13:29:48.947992 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 13:29:48.948002 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 13:29:48.948011 kernel: efifb: probing for efifb
Dec 13 13:29:48.949302 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 13 13:29:48.949312 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 13 13:29:48.949321 kernel: efifb: scrolling: redraw
Dec 13 13:29:48.949329 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 13:29:48.949337 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 13:29:48.949345 kernel: fb0: EFI VGA frame buffer device
Dec 13 13:29:48.949354 kernel: pstore: Using crash dump compression: deflate
Dec 13 13:29:48.949362 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 13:29:48.949370 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:29:48.949382 kernel: Segment Routing with IPv6
Dec 13 13:29:48.949391 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:29:48.949399 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:29:48.949407 kernel: Key type dns_resolver registered
Dec 13 13:29:48.949417 kernel: IPI shorthand broadcast: enabled
Dec 13 13:29:48.949426 kernel: sched_clock: Marking stable (583002551, 161880355)->(799098264, -54215358)
Dec 13 13:29:48.949434 kernel: registered taskstats version 1
Dec 13 13:29:48.949442 kernel: Loading compiled-in X.509 certificates
Dec 13 13:29:48.949450 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:29:48.949461 kernel: Key type .fscrypt registered
Dec 13 13:29:48.949469 kernel: Key type fscrypt-provisioning registered
Dec 13 13:29:48.949478 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:29:48.949486 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:29:48.949494 kernel: ima: No architecture policies found
Dec 13 13:29:48.949502 kernel: clk: Disabling unused clocks
Dec 13 13:29:48.949510 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:29:48.949518 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:29:48.949527 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:29:48.949537 kernel: Run /init as init process
Dec 13 13:29:48.949545 kernel: with arguments:
Dec 13 13:29:48.949553 kernel: /init
Dec 13 13:29:48.949573 kernel: with environment:
Dec 13 13:29:48.949581 kernel: HOME=/
Dec 13 13:29:48.949589 kernel: TERM=linux
Dec 13 13:29:48.949597 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:29:48.949609 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:29:48.949622 systemd[1]: Detected virtualization kvm.
Dec 13 13:29:48.949631 systemd[1]: Detected architecture x86-64.
Dec 13 13:29:48.949640 systemd[1]: Running in initrd.
Dec 13 13:29:48.949648 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:29:48.949657 systemd[1]: Hostname set to .
Dec 13 13:29:48.949666 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:29:48.949674 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:29:48.949683 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:29:48.949694 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:29:48.949704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:29:48.949713 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:29:48.949722 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:29:48.949731 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:29:48.949741 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:29:48.949752 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:29:48.949761 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:29:48.949770 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:29:48.949779 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:29:48.949788 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:29:48.949797 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:29:48.949805 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:29:48.949814 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:29:48.949823 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:29:48.949834 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:29:48.949843 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:29:48.949852 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:29:48.949861 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:29:48.949870 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:29:48.949878 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:29:48.949887 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:29:48.949896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:29:48.949905 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:29:48.949916 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:29:48.949925 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:29:48.949934 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:29:48.949942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:29:48.949951 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:29:48.949960 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:29:48.949969 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:29:48.950003 systemd-journald[192]: Collecting audit messages is disabled.
Dec 13 13:29:48.950028 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:29:48.950037 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:29:48.950046 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:29:48.950055 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:29:48.950064 systemd-journald[192]: Journal started
Dec 13 13:29:48.950084 systemd-journald[192]: Runtime Journal (/run/log/journal/e87f7dc526c147889af879fb22964b1e) is 6.0M, max 48.2M, 42.2M free.
Dec 13 13:29:48.938386 systemd-modules-load[193]: Inserted module 'overlay'
Dec 13 13:29:48.957535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:29:48.957551 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:29:48.957729 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:29:48.965782 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:29:48.971582 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:29:48.974357 systemd-modules-load[193]: Inserted module 'br_netfilter'
Dec 13 13:29:48.975290 kernel: Bridge firewalling registered
Dec 13 13:29:48.975754 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:29:48.978214 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:29:48.980939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:29:48.995705 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:29:48.998420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:29:49.009176 dracut-cmdline[223]: dracut-dracut-053
Dec 13 13:29:49.011672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:29:49.013708 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:29:49.022772 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:29:49.057795 systemd-resolved[243]: Positive Trust Anchors:
Dec 13 13:29:49.057816 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:29:49.057847 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:29:49.060825 systemd-resolved[243]: Defaulting to hostname 'linux'.
Dec 13 13:29:49.062100 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:29:49.067594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:29:49.103595 kernel: SCSI subsystem initialized
Dec 13 13:29:49.112587 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:29:49.123594 kernel: iscsi: registered transport (tcp)
Dec 13 13:29:49.144586 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:29:49.144609 kernel: QLogic iSCSI HBA Driver
Dec 13 13:29:49.194490 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:29:49.205753 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:29:49.230884 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:29:49.230909 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:29:49.231931 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:29:49.275594 kernel: raid6: avx2x4 gen() 25580 MB/s
Dec 13 13:29:49.292589 kernel: raid6: avx2x2 gen() 26099 MB/s
Dec 13 13:29:49.309852 kernel: raid6: avx2x1 gen() 21703 MB/s
Dec 13 13:29:49.309872 kernel: raid6: using algorithm avx2x2 gen() 26099 MB/s
Dec 13 13:29:49.327725 kernel: raid6: .... xor() 19584 MB/s, rmw enabled
Dec 13 13:29:49.327744 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 13:29:49.347581 kernel: xor: automatically using best checksumming function avx
Dec 13 13:29:49.498593 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:29:49.512051 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:29:49.521729 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:29:49.534480 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Dec 13 13:29:49.539299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:29:49.545828 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:29:49.559527 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Dec 13 13:29:49.596055 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:29:49.612760 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:29:49.673368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:29:49.678704 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:29:49.692871 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:29:49.696520 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:29:49.699236 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:29:49.701596 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:29:49.712768 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:29:49.716904 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 13:29:49.753352 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:29:49.753368 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:29:49.753524 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 13:29:49.753536 kernel: AES CTR mode by8 optimization enabled
Dec 13 13:29:49.753546 kernel: libata version 3.00 loaded.
Dec 13 13:29:49.753557 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:29:49.753583 kernel: GPT:9289727 != 19775487
Dec 13 13:29:49.753594 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:29:49.753604 kernel: GPT:9289727 != 19775487
Dec 13 13:29:49.753614 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:29:49.753624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:29:49.753638 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 13:29:49.784232 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 13:29:49.784254 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 13:29:49.784406 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 13:29:49.784544 kernel: scsi host0: ahci
Dec 13 13:29:49.784710 kernel: scsi host1: ahci
Dec 13 13:29:49.784853 kernel: scsi host2: ahci
Dec 13 13:29:49.785000 kernel: scsi host3: ahci
Dec 13 13:29:49.785140 kernel: scsi host4: ahci
Dec 13 13:29:49.785295 kernel: scsi host5: ahci
Dec 13 13:29:49.785435 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Dec 13 13:29:49.785446 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Dec 13 13:29:49.785457 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Dec 13 13:29:49.785467 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Dec 13 13:29:49.785481 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Dec 13 13:29:49.785492 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Dec 13 13:29:49.785505 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (457)
Dec 13 13:29:49.727166 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:29:49.750189 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:29:49.750480 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:29:49.791799 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (476)
Dec 13 13:29:49.752063 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:29:49.754034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:29:49.754250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:29:49.757458 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:29:49.766917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:29:49.789682 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:29:49.792134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:29:49.800346 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:29:49.811664 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:29:49.812929 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:29:49.819945 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:29:49.838672 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:29:49.840482 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:29:49.848539 disk-uuid[558]: Primary Header is updated.
Dec 13 13:29:49.848539 disk-uuid[558]: Secondary Entries is updated.
Dec 13 13:29:49.848539 disk-uuid[558]: Secondary Header is updated.
Dec 13 13:29:49.852584 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:29:49.857589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:29:49.859184 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:29:50.085590 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 13:29:50.085643 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 13:29:50.093578 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 13:29:50.093600 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 13:29:50.094588 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 13:29:50.094600 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 13:29:50.095592 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 13:29:50.096855 kernel: ata3.00: applying bridge limits
Dec 13 13:29:50.096867 kernel: ata3.00: configured for UDMA/100
Dec 13 13:29:50.097591 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 13:29:50.146591 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 13:29:50.160234 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 13:29:50.160248 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 13:29:50.858521 disk-uuid[562]: The operation has completed successfully.
Dec 13 13:29:50.860030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:29:50.890583 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:29:50.890713 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:29:50.916750 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:29:50.920401 sh[592]: Success
Dec 13 13:29:50.933594 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 13:29:50.968784 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:29:50.986019 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:29:50.988867 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:29:51.000704 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52
Dec 13 13:29:51.000737 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:29:51.000749 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:29:51.001721 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:29:51.003072 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:29:51.007116 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:29:51.009404 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:29:51.020773 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:29:51.023333 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:29:51.034684 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:29:51.034718 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:29:51.034729 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:29:51.037578 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:29:51.046982 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:29:51.048704 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:29:51.059577 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:29:51.065710 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:29:51.126163 ignition[692]: Ignition 2.20.0
Dec 13 13:29:51.126183 ignition[692]: Stage: fetch-offline
Dec 13 13:29:51.126218 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:29:51.126228 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:29:51.126314 ignition[692]: parsed url from cmdline: ""
Dec 13 13:29:51.126318 ignition[692]: no config URL provided
Dec 13 13:29:51.126323 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:29:51.126332 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:29:51.126359 ignition[692]: op(1): [started] loading QEMU firmware config module
Dec 13 13:29:51.132956 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:29:51.126364 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:29:51.140431 ignition[692]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:29:51.140460 ignition[692]: QEMU firmware config was not found. Ignoring...
Dec 13 13:29:51.145742 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:29:51.173752 systemd-networkd[780]: lo: Link UP Dec 13 13:29:51.173763 systemd-networkd[780]: lo: Gained carrier Dec 13 13:29:51.175485 systemd-networkd[780]: Enumeration completed Dec 13 13:29:51.175616 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:29:51.175893 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:29:51.175897 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:29:51.176806 systemd-networkd[780]: eth0: Link UP Dec 13 13:29:51.176810 systemd-networkd[780]: eth0: Gained carrier Dec 13 13:29:51.176817 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:29:51.178276 systemd[1]: Reached target network.target - Network. Dec 13 13:29:51.197117 ignition[692]: parsing config with SHA512: d24cb073740b261642b65b31a03204418680d0e782f0deab618cfb5d00911752470633f678847a4aee6b3d119d148adce63adf141239598055f9ca7828fa11b4 Dec 13 13:29:51.201261 unknown[692]: fetched base config from "system" Dec 13 13:29:51.201767 ignition[692]: fetch-offline: fetch-offline passed Dec 13 13:29:51.201277 unknown[692]: fetched user config from "qemu" Dec 13 13:29:51.201896 ignition[692]: Ignition finished successfully Dec 13 13:29:51.201649 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:29:51.208672 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:29:51.211155 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 13:29:51.218702 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:29:51.233754 ignition[784]: Ignition 2.20.0 Dec 13 13:29:51.233766 ignition[784]: Stage: kargs Dec 13 13:29:51.233929 ignition[784]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:51.233940 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:29:51.237632 ignition[784]: kargs: kargs passed Dec 13 13:29:51.237683 ignition[784]: Ignition finished successfully Dec 13 13:29:51.241820 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:29:51.254683 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:29:51.265843 ignition[793]: Ignition 2.20.0 Dec 13 13:29:51.265853 ignition[793]: Stage: disks Dec 13 13:29:51.266017 ignition[793]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:51.266028 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:29:51.266860 ignition[793]: disks: disks passed Dec 13 13:29:51.269149 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:29:51.266902 ignition[793]: Ignition finished successfully Dec 13 13:29:51.270454 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:29:51.272021 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:29:51.274224 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:29:51.275247 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:29:51.275302 systemd[1]: Reached target basic.target - Basic System. 
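The fetch-offline stage found no config URL on the kernel command line, loaded the base config from the image ("system") and the user config over QEMU's fw_cfg ("qemu"), and logged the SHA512 of what it parsed. A sketch of a minimal user config and how such a fingerprint is taken; the field names follow the public Ignition v3 spec, but the contents here are hypothetical:

    import hashlib, json

    cfg = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            # hypothetical entry; the real keys on this host are not in the log
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"]},
        ]},
    }
    blob = json.dumps(cfg, sort_keys=True).encode()
    print("parsing config with SHA512:", hashlib.sha512(blob).hexdigest())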
Dec 13 13:29:51.285758 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:29:51.299481 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 13:29:51.305929 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:29:51.318660 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:29:51.400730 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. Dec 13 13:29:51.401856 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:29:51.403718 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:29:51.416639 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:29:51.418500 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:29:51.427107 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811) Dec 13 13:29:51.427157 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:51.427184 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:29:51.427200 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:29:51.420799 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 13:29:51.420841 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:29:51.420863 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:29:51.431085 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:29:51.434466 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:29:51.439929 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:29:51.440731 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:29:51.474854 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:29:51.480338 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:29:51.484404 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:29:51.488281 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:29:51.575979 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:29:51.597656 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:29:51.599329 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:29:51.605585 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:51.623971 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:29:51.626611 ignition[924]: INFO : Ignition 2.20.0 Dec 13 13:29:51.626611 ignition[924]: INFO : Stage: mount Dec 13 13:29:51.626611 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:51.626611 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:29:51.630406 ignition[924]: INFO : mount: mount passed Dec 13 13:29:51.630406 ignition[924]: INFO : Ignition finished successfully Dec 13 13:29:51.633601 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
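Both the fsck and the sysroot mount address disks by label (/dev/disk/by-label/ROOT), which are udev-maintained symlinks to the real device nodes; the cut: ... No such file or directory lines look alarming but are expected on a fresh disk, since /sysroot/etc has no passwd, group, shadow, or gshadow to copy entries from yet, and initrd-setup-root still finishes successfully. Resolving a by-label name from userspace is a one-liner:

    import os

    def dev_by_label(label: str):
        # udev keeps /dev/disk/by-label/<LABEL> as a symlink to the device node.
        link = os.path.join("/dev/disk/by-label", label)
        return os.path.realpath(link) if os.path.lexists(link) else None

    for label in ("ROOT", "OEM", "EFI-SYSTEM"):
        print(label, "->", dev_by_label(label))  # e.g. ROOT -> /dev/vda9 here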
Dec 13 13:29:51.647638 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:29:52.000240 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:29:52.012707 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:29:52.019950 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (938) Dec 13 13:29:52.019981 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:29:52.019992 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:29:52.021577 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:29:52.023586 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:29:52.025408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:29:52.044386 ignition[955]: INFO : Ignition 2.20.0 Dec 13 13:29:52.044386 ignition[955]: INFO : Stage: files Dec 13 13:29:52.046203 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:52.046203 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:29:52.046203 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:29:52.046203 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:29:52.046203 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:29:52.052679 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:29:52.052679 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:29:52.052679 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:29:52.052679 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:29:52.052679 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:29:52.050367 unknown[955]: wrote ssh authorized keys file for user: core Dec 13 13:29:52.089715 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:29:52.178425 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:29:52.180442 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:29:52.180442 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 13:29:52.469168 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 13:29:52.711294 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file 
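Each remote file in the files stage is logged as GET <url>: attempt #1, i.e. Ignition downloads with retries. A rough sketch of that loop shape, assuming a simple linear backoff; Ignition's real retry policy is its own and is not reproduced here:

    import time
    import urllib.request

    def fetch(url: str, attempts: int = 5, delay: float = 1.0) -> bytes:
        for n in range(1, attempts + 1):
            try:
                print(f"GET {url}: attempt #{n}")
                with urllib.request.urlopen(url) as resp:
                    print("GET result: OK")
                    return resp.read()
            except OSError:
                time.sleep(delay * n)  # back off a little more each round
        raise RuntimeError(f"giving up on {url} after {attempts} attempts")

    # fetch("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz")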
"/sysroot/home/core/nginx.yaml" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:29:52.713343 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 13:29:53.062688 systemd-networkd[780]: eth0: Gained IPv6LL Dec 13 13:29:53.127986 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 13:29:53.468758 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:29:53.468758 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 13:29:53.472410 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:29:53.472410 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:29:53.472410 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 13:29:53.472410 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 13:29:53.472410 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:29:53.472410 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:29:53.472410 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 13:29:53.472410 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" 
Dec 13 13:29:53.493480 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:29:53.498430 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:29:53.499985 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 13:29:53.499985 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:29:53.499985 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:29:53.499985 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:29:53.499985 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:29:53.499985 ignition[955]: INFO : files: files passed Dec 13 13:29:53.499985 ignition[955]: INFO : Ignition finished successfully Dec 13 13:29:53.501415 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:29:53.511702 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:29:53.513481 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:29:53.515303 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:29:53.515452 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:29:53.522794 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 13:29:53.525903 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:29:53.525903 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:29:53.529046 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:29:53.532467 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:29:53.533876 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:29:53.545703 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:29:53.570830 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:29:53.570958 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:29:53.572131 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:29:53.574515 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:29:53.577398 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:29:53.578181 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:29:53.596369 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:29:53.609718 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:29:53.619178 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:29:53.620470 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
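After writing units, Ignition applies preset policy: coreos-metadata.service loses its enablement symlinks ("preset to disabled") while prepare-helm.service is preset to enabled. Presets are first-match-wins rules over unit name patterns; a toy evaluator, where the catch-all line and the fallback are assumptions rather than anything this log shows:

    from fnmatch import fnmatch

    PRESETS = [
        "enable prepare-helm.service",
        "disable coreos-metadata.service",
        "disable *",  # hypothetical distro catch-all
    ]

    def preset_action(unit: str) -> str:
        # First matching line wins, as in systemd.preset(5).
        for line in PRESETS:
            verb, pattern = line.split()
            if fnmatch(unit, pattern):
                return verb
        return "enable"  # fallback when no line matches

    print(preset_action("prepare-helm.service"))     # enable
    print(preset_action("coreos-metadata.service"))  # disable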
Dec 13 13:29:53.622718 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:29:53.624724 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:29:53.624841 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:29:53.627162 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:29:53.628724 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:29:53.630762 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:29:53.632807 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:29:53.634820 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:29:53.636972 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:29:53.639091 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:29:53.641491 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:29:53.643441 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:29:53.645623 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:29:53.647378 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:29:53.647520 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:29:53.649907 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:29:53.651436 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:29:53.653535 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:29:53.653686 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:29:53.655785 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:29:53.655911 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:29:53.658263 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:29:53.658387 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:29:53.660247 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:29:53.662022 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:29:53.667638 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:29:53.669680 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:29:53.671619 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:29:53.673967 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:29:53.674058 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:29:53.675812 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:29:53.675899 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:29:53.677769 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:29:53.677898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:29:53.679753 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:29:53.679874 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:29:53.691702 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:29:53.693332 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Dec 13 13:29:53.694472 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:29:53.694637 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:29:53.696796 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:29:53.697047 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:29:53.702989 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:29:53.703207 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:29:53.706500 ignition[1010]: INFO : Ignition 2.20.0 Dec 13 13:29:53.706500 ignition[1010]: INFO : Stage: umount Dec 13 13:29:53.706500 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:29:53.706500 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:29:53.706500 ignition[1010]: INFO : umount: umount passed Dec 13 13:29:53.706500 ignition[1010]: INFO : Ignition finished successfully Dec 13 13:29:53.706474 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:29:53.706607 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:29:53.709084 systemd[1]: Stopped target network.target - Network. Dec 13 13:29:53.710063 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:29:53.710133 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:29:53.712100 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:29:53.712154 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:29:53.714299 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:29:53.714345 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:29:53.716087 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:29:53.716141 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:29:53.718213 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:29:53.720239 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:29:53.721607 systemd-networkd[780]: eth0: DHCPv6 lease lost Dec 13 13:29:53.723328 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:29:53.723936 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:29:53.724065 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:29:53.724960 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:29:53.725025 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:29:53.733666 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:29:53.734961 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:29:53.735019 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:29:53.737413 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:29:53.739914 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:29:53.740041 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:29:53.746660 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:29:53.746725 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:29:53.748093 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Dec 13 13:29:53.748149 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:29:53.750143 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:29:53.750192 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:29:53.753819 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:29:53.753934 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:29:53.756389 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:29:53.756551 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:29:53.759347 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:29:53.759407 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:29:53.760581 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:29:53.760625 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:29:53.762508 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:29:53.762557 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:29:53.764680 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:29:53.764725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:29:53.766685 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:29:53.766733 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:29:53.773703 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:29:53.775056 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:29:53.775118 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:29:53.788546 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:29:53.788607 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:29:53.790765 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:29:53.790814 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:29:53.793066 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:29:53.793120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:29:53.795550 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:29:53.795666 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:29:53.981337 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:29:53.981500 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:29:53.983979 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:29:53.985256 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:29:53.985324 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:29:53.995709 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:29:54.004556 systemd[1]: Switching root. Dec 13 13:29:54.049233 systemd-journald[192]: Journal stopped Dec 13 13:29:55.211099 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
Dec 13 13:29:55.211170 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:29:55.211184 kernel: SELinux: policy capability open_perms=1 Dec 13 13:29:55.211197 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:29:55.211208 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:29:55.211220 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:29:55.211235 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:29:55.211246 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:29:55.211257 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:29:55.211268 kernel: audit: type=1403 audit(1734096594.503:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:29:55.211285 systemd[1]: Successfully loaded SELinux policy in 38.603ms. Dec 13 13:29:55.211308 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.666ms. Dec 13 13:29:55.211322 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:29:55.211334 systemd[1]: Detected virtualization kvm. Dec 13 13:29:55.211346 systemd[1]: Detected architecture x86-64. Dec 13 13:29:55.211360 systemd[1]: Detected first boot. Dec 13 13:29:55.211372 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:29:55.211384 zram_generator::config[1055]: No configuration found. Dec 13 13:29:55.211397 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:29:55.211410 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:29:55.211422 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:29:55.211434 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:29:55.211447 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:29:55.211461 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:29:55.211475 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:29:55.211486 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:29:55.211499 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:29:55.211516 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:29:55.211533 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:29:55.211545 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:29:55.211557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:29:55.211941 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:29:55.211954 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:29:55.211966 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:29:55.211978 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
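The systemd 255 banner encodes compile-time options as +FEATURE/-FEATURE tokens, which is handy to parse mechanically when comparing builds. A sketch over an abbreviated copy of this boot's string:

    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
              "-GNUTLS +OPENSSL -ACL +TPM2 -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

    flags = {tok[1:]: tok[0] == "+" for tok in banner.split() if tok[0] in "+-"}
    print(sorted(name for name, on in flags.items() if not on))
    # -> ['ACL', 'APPARMOR', 'BPF_FRAMEWORK', 'GNUTLS', 'SYSVINIT', 'XKBCOMMON']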
Dec 13 13:29:55.211991 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:29:55.212004 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 13:29:55.212016 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:29:55.212028 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:29:55.212041 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:29:55.212055 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:29:55.212073 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:29:55.212087 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:29:55.212099 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:29:55.212110 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:29:55.212124 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:29:55.212135 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:29:55.212148 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:29:55.212162 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:29:55.212174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:29:55.212186 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:29:55.212198 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:29:55.212210 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:29:55.212222 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:29:55.212234 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:29:55.212246 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:29:55.212258 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:29:55.212273 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:29:55.212285 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:29:55.212298 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:29:55.212310 systemd[1]: Reached target machines.target - Containers. Dec 13 13:29:55.212322 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:29:55.212334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:29:55.212347 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:29:55.212361 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:29:55.212379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:29:55.212392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:29:55.212404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:29:55.212416 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 13 13:29:55.212428 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:29:55.212441 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:29:55.212453 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:29:55.212465 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:29:55.212477 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:29:55.212492 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:29:55.212504 kernel: fuse: init (API version 7.39) Dec 13 13:29:55.212516 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:29:55.212528 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:29:55.212540 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:29:55.212552 kernel: ACPI: bus type drm_connector registered Dec 13 13:29:55.212576 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:29:55.212588 kernel: loop: module loaded Dec 13 13:29:55.212599 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:29:55.212614 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:29:55.212626 systemd[1]: Stopped verity-setup.service. Dec 13 13:29:55.212656 systemd-journald[1132]: Collecting audit messages is disabled. Dec 13 13:29:55.212679 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:29:55.212692 systemd-journald[1132]: Journal started Dec 13 13:29:55.212717 systemd-journald[1132]: Runtime Journal (/run/log/journal/e87f7dc526c147889af879fb22964b1e) is 6.0M, max 48.2M, 42.2M free. Dec 13 13:29:54.998897 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:29:55.015259 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:29:55.015734 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:29:55.215593 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:29:55.216367 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:29:55.217502 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:29:55.218747 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:29:55.219886 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:29:55.221056 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:29:55.222249 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:29:55.223449 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:29:55.224878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:29:55.226386 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:29:55.226557 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:29:55.228015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:29:55.228189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:29:55.229776 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 13:29:55.229948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:29:55.231362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:29:55.231535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:29:55.233111 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:29:55.233276 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:29:55.234638 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:29:55.234803 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:29:55.236146 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:29:55.237500 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:29:55.238980 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:29:55.252319 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:29:55.260684 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:29:55.262908 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:29:55.264104 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:29:55.264139 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:29:55.266188 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:29:55.268552 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:29:55.271830 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:29:55.273012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:29:55.275766 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:29:55.281671 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:29:55.283076 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:29:55.285017 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:29:55.286393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:29:55.289385 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:29:55.292768 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:29:55.302327 systemd-journald[1132]: Time spent on flushing to /var/log/journal/e87f7dc526c147889af879fb22964b1e is 26.914ms for 1044 entries. Dec 13 13:29:55.302327 systemd-journald[1132]: System Journal (/var/log/journal/e87f7dc526c147889af879fb22964b1e) is 8.0M, max 195.6M, 187.6M free. Dec 13 13:29:55.352642 systemd-journald[1132]: Received client request to flush runtime journal. Dec 13 13:29:55.352689 kernel: loop0: detected capacity change from 0 to 138184 Dec 13 13:29:55.296752 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Dec 13 13:29:55.299642 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:29:55.301014 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:29:55.303693 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:29:55.320032 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:29:55.321464 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:29:55.328921 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:29:55.330347 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:29:55.350852 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:29:55.352778 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Dec 13 13:29:55.352792 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Dec 13 13:29:55.358622 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:29:55.353983 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:29:55.365837 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:29:55.367474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:29:55.371276 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:29:55.378229 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:29:55.380931 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:29:55.386358 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 13:29:55.395790 kernel: loop1: detected capacity change from 0 to 141000 Dec 13 13:29:55.406226 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:29:55.415721 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:29:55.432585 kernel: loop2: detected capacity change from 0 to 210664 Dec 13 13:29:55.433948 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Dec 13 13:29:55.433967 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Dec 13 13:29:55.439791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:29:55.470589 kernel: loop3: detected capacity change from 0 to 138184 Dec 13 13:29:55.481597 kernel: loop4: detected capacity change from 0 to 141000 Dec 13 13:29:55.492596 kernel: loop5: detected capacity change from 0 to 210664 Dec 13 13:29:55.503475 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 13:29:55.504073 (sd-merge)[1198]: Merged extensions into '/usr'. Dec 13 13:29:55.509248 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:29:55.509262 systemd[1]: Reloading... Dec 13 13:29:55.557591 zram_generator::config[1223]: No configuration found. Dec 13 13:29:55.609580 ldconfig[1164]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
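The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes images onto /usr; the reload that follows (and its docker.socket /var/run warning) picks up unit files shipped inside those images. An image is only merged when its embedded extension-release metadata matches the host's os-release; a simplified compatibility check, ignoring the VERSION_ID fallback the real logic also supports, with hypothetical field values:

    def parse_release(text: str) -> dict:
        # os-release / extension-release files are plain KEY=value lines.
        out = {}
        for line in text.splitlines():
            if "=" in line and not line.lstrip().startswith("#"):
                key, val = line.split("=", 1)
                out[key.strip()] = val.strip().strip('"')
        return out

    host = parse_release('ID=flatcar\nSYSEXT_LEVEL=1.0\n')
    ext = parse_release('ID=flatcar\nSYSEXT_LEVEL=1.0\n')

    compatible = (ext.get("ID") in ("_any", host.get("ID"))
                  and ext.get("SYSEXT_LEVEL") == host.get("SYSEXT_LEVEL"))
    print("merge" if compatible else "skip")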
Dec 13 13:29:55.685022 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:29:55.734253 systemd[1]: Reloading finished in 224 ms. Dec 13 13:29:55.769595 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:29:55.771427 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:29:55.783710 systemd[1]: Starting ensure-sysext.service... Dec 13 13:29:55.785730 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:29:55.793435 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:29:55.793457 systemd[1]: Reloading... Dec 13 13:29:55.826172 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:29:55.826463 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:29:55.827472 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:29:55.829804 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Dec 13 13:29:55.829882 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Dec 13 13:29:55.835075 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:29:55.835091 systemd-tmpfiles[1262]: Skipping /boot Dec 13 13:29:55.841663 zram_generator::config[1289]: No configuration found. Dec 13 13:29:55.849105 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:29:55.849120 systemd-tmpfiles[1262]: Skipping /boot Dec 13 13:29:55.953947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:29:56.002652 systemd[1]: Reloading finished in 208 ms. Dec 13 13:29:56.024867 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:29:56.037145 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:29:56.046144 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:29:56.048434 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:29:56.050782 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:29:56.054660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:29:56.059952 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:29:56.062914 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:29:56.067128 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:29:56.067296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:29:56.069888 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:29:56.077797 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
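The systemd-tmpfiles "Duplicate line for path ..., ignoring" warnings mean two tmpfiles.d fragments claim the same path; the first definition read wins and later ones are dropped. A toy reproduction of that rule, with made-up fragment names and entries:

    fragments = {
        "provision.conf": ["d /root 0700 root root -"],
        "extra.conf":     ["d /root 0755 root root -"],  # hypothetical duplicate
    }

    seen = {}
    for conf, entries in fragments.items():
        for entry in entries:
            path = entry.split()[1]
            if path in seen:
                print(f'{conf}: Duplicate line for path "{path}", ignoring.')
            else:
                seen[path] = entry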
Dec 13 13:29:56.080159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:29:56.081334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:29:56.084298 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:29:56.085442 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:29:56.086470 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:29:56.086909 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Dec 13 13:29:56.087006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:29:56.089149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:29:56.089897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:29:56.091802 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:29:56.092352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:29:56.101338 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:29:56.106039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:29:56.106294 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:29:56.113293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:29:56.116300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:29:56.116825 augenrules[1363]: No rules Dec 13 13:29:56.118541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:29:56.119783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:29:56.122097 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:29:56.123234 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:29:56.124040 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:29:56.126222 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:29:56.127036 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:29:56.133949 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:29:56.137069 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:29:56.139294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:29:56.139731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:29:56.142063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:29:56.142632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:29:56.144387 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:29:56.144552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:29:56.146414 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 13 13:29:56.157953 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:29:56.174675 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1376) Dec 13 13:29:56.174762 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1376) Dec 13 13:29:56.177124 systemd[1]: Finished ensure-sysext.service. Dec 13 13:29:56.181662 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 13:29:56.183324 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:29:56.190316 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:29:56.192272 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:29:56.193485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:29:56.196784 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:29:56.199469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:29:56.202111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:29:56.203741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:29:56.206734 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:29:56.210647 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 13:29:56.211841 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:29:56.211872 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:29:56.212415 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:29:56.212642 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:29:56.214320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:29:56.214609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:29:56.218102 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:29:56.218706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:29:56.221103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:29:56.221282 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:29:56.225536 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:29:56.225622 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:29:56.235369 augenrules[1404]: /sbin/augenrules: No change Dec 13 13:29:56.237250 systemd-resolved[1331]: Positive Trust Anchors: Dec 13 13:29:56.237264 systemd-resolved[1331]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:29:56.237294 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:29:56.244642 systemd-resolved[1331]: Defaulting to hostname 'linux'. Dec 13 13:29:56.245010 augenrules[1436]: No rules Dec 13 13:29:56.246409 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:29:56.246726 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:29:56.248020 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:29:56.249967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:29:56.267581 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1375) Dec 13 13:29:56.290711 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 13:29:56.296594 kernel: ACPI: button: Power Button [PWRF] Dec 13 13:29:56.299789 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:29:56.301305 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:29:56.302917 systemd-networkd[1417]: lo: Link UP Dec 13 13:29:56.302926 systemd-networkd[1417]: lo: Gained carrier Dec 13 13:29:56.304501 systemd-networkd[1417]: Enumeration completed Dec 13 13:29:56.304614 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:29:56.304914 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:29:56.304918 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:29:56.305860 systemd-networkd[1417]: eth0: Link UP Dec 13 13:29:56.305871 systemd-networkd[1417]: eth0: Gained carrier Dec 13 13:29:56.305885 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:29:56.306499 systemd[1]: Reached target network.target - Network. Dec 13 13:29:56.312727 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:29:56.317102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:29:56.317662 systemd-networkd[1417]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:29:56.318510 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Dec 13 13:29:57.319215 systemd-timesyncd[1418]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 13:29:57.319268 systemd-timesyncd[1418]: Initial clock synchronization to Fri 2024-12-13 13:29:57.319071 UTC. Dec 13 13:29:57.320034 systemd-resolved[1331]: Clock change detected. Flushing caches. Dec 13 13:29:57.322705 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
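systemd-resolved's positive trust anchor is the root zone's DS record for the 2017 KSK; the fields after the owner name are key tag, DNSSEC algorithm, digest type, and the digest itself. Splitting it apart:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
    print({
        "owner": owner,                   # "." is the DNS root
        "key_tag": int(key_tag),          # 20326, the 2017 root KSK
        "algorithm": int(alg),            # 8 = RSASHA256
        "digest_type": int(digest_type),  # 2 = SHA-256
        "digest": digest,
    })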
Dec 13 13:29:57.348502 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 13:29:57.351405 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:29:57.373688 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 13:29:57.374047 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 13:29:57.374210 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 13:29:57.374403 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 13:29:57.381124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:29:57.381501 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 13:29:57.387006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:29:57.387213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:29:57.390682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:29:57.442917 kernel: kvm_amd: TSC scaling supported Dec 13 13:29:57.442992 kernel: kvm_amd: Nested Virtualization enabled Dec 13 13:29:57.443030 kernel: kvm_amd: Nested Paging enabled Dec 13 13:29:57.443042 kernel: kvm_amd: LBR virtualization supported Dec 13 13:29:57.443556 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 13:29:57.444757 kernel: kvm_amd: Virtual GIF supported Dec 13 13:29:57.464538 kernel: EDAC MC: Ver: 3.0.0 Dec 13 13:29:57.474130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:29:57.500900 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:29:57.511741 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:29:57.520660 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:29:57.553979 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:29:57.555512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:29:57.556611 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:29:57.557764 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:29:57.559006 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:29:57.560410 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:29:57.561562 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:29:57.562791 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:29:57.564031 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:29:57.564061 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:29:57.564933 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:29:57.566754 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:29:57.569541 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:29:57.582059 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:29:57.584544 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
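[Annotation] Path, timer, and socket units dominate the startup sequence above; what was actually scheduled can be listed on the host. A sketch, assuming a shell on the booted machine:

    systemctl list-timers               # logrotate.timer, mdadm.timer, systemd-tmpfiles-clean.timer
    systemctl list-sockets              # dbus.socket, sshd.socket, docker.socket
    systemctl list-units --type=path    # motdgen.path and the user_data watch path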
Dec 13 13:29:57.586124 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:29:57.587241 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:29:57.588182 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:29:57.589126 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:29:57.589156 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:29:57.590148 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:29:57.592168 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:29:57.596500 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:29:57.596560 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:29:57.598898 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:29:57.598991 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:29:57.601661 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:29:57.606109 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 13:29:57.609736 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:29:57.612659 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:29:57.614616 jq[1470]: false Dec 13 13:29:57.618647 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:29:57.620091 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:29:57.621549 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:29:57.623776 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:29:57.626933 extend-filesystems[1471]: Found loop3 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found loop4 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found loop5 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found sr0 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found vda Dec 13 13:29:57.631255 extend-filesystems[1471]: Found vda1 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found vda2 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found vda3 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found usr Dec 13 13:29:57.631255 extend-filesystems[1471]: Found vda4 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found vda6 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found vda7 Dec 13 13:29:57.631255 extend-filesystems[1471]: Found vda9 Dec 13 13:29:57.631255 extend-filesystems[1471]: Checking size of /dev/vda9 Dec 13 13:29:57.627838 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:29:57.646981 dbus-daemon[1469]: [system] SELinux support is enabled Dec 13 13:29:57.632650 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:29:57.665191 update_engine[1480]: I20241213 13:29:57.662082 1480 main.cc:92] Flatcar Update Engine starting Dec 13 13:29:57.638324 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
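[Annotation] The extend-filesystems enumeration above (loop3 through vda9) is a plain block-device walk; the same inventory can be reproduced with standard tools. Illustrative only:

    lsblk -o NAME,FSTYPE,LABEL,SIZE /dev/vda    # vda1..vda9 as found above
    blkid /dev/vda9                             # the ROOT filesystem resized just below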
Dec 13 13:29:57.665498 jq[1483]: true Dec 13 13:29:57.638570 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:29:57.638893 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:29:57.639596 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:29:57.642897 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:29:57.643136 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:29:57.647852 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:29:57.667047 extend-filesystems[1471]: Resized partition /dev/vda9 Dec 13 13:29:57.672393 extend-filesystems[1503]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:29:57.674524 update_engine[1480]: I20241213 13:29:57.669926 1480 update_check_scheduler.cc:74] Next update check in 8m28s Dec 13 13:29:57.674278 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:29:57.677381 systemd-logind[1477]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 13:29:57.677408 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 13:29:57.677982 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:29:57.678924 tar[1490]: linux-amd64/helm Dec 13 13:29:57.677987 systemd-logind[1477]: New seat seat0. Dec 13 13:29:57.678497 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:29:57.680060 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:29:57.680077 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:29:57.681377 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:29:57.682699 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:29:57.686522 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 13:29:57.690720 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1380) Dec 13 13:29:57.692041 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:29:57.695651 jq[1493]: true Dec 13 13:29:57.714496 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 13:29:57.754076 extend-filesystems[1503]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:29:57.754076 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:29:57.754076 extend-filesystems[1503]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 13:29:57.762909 extend-filesystems[1471]: Resized filesystem in /dev/vda9 Dec 13 13:29:57.762953 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:29:57.764356 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:29:57.773999 bash[1523]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:29:57.775140 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
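[Annotation] The resize above grew /dev/vda9 online, while mounted at /, from 553472 to 1864699 blocks of 4 KiB, i.e. from about 2.1 GiB to about 7.1 GiB. The manual equivalent of what extend-filesystems.service did, as a sketch (growpart comes from cloud-utils and is an assumption, not shown in this log):

    growpart /dev/vda 9      # grow partition 9 into the free space (hypothetical step)
    resize2fs /dev/vda9      # online ext4 grow: 553472 -> 1864699 4 KiB blocks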
Dec 13 13:29:57.778700 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 13:29:57.782728 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:29:57.884741 containerd[1502]: time="2024-12-13T13:29:57.884633443Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:29:57.909584 containerd[1502]: time="2024-12-13T13:29:57.909539811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:29:57.911629 containerd[1502]: time="2024-12-13T13:29:57.911602610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:29:57.911707 containerd[1502]: time="2024-12-13T13:29:57.911693831Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:29:57.911757 containerd[1502]: time="2024-12-13T13:29:57.911746339Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:29:57.911989 containerd[1502]: time="2024-12-13T13:29:57.911974217Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:29:57.912053 containerd[1502]: time="2024-12-13T13:29:57.912040591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:29:57.912171 containerd[1502]: time="2024-12-13T13:29:57.912154054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:29:57.912221 containerd[1502]: time="2024-12-13T13:29:57.912209929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:29:57.912463 containerd[1502]: time="2024-12-13T13:29:57.912444879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:29:57.912539 containerd[1502]: time="2024-12-13T13:29:57.912527394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:29:57.912601 containerd[1502]: time="2024-12-13T13:29:57.912587627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:29:57.912643 containerd[1502]: time="2024-12-13T13:29:57.912631900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:29:57.912781 containerd[1502]: time="2024-12-13T13:29:57.912766542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:29:57.913071 containerd[1502]: time="2024-12-13T13:29:57.913054803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 13:29:57.913242 containerd[1502]: time="2024-12-13T13:29:57.913227126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:29:57.913297 containerd[1502]: time="2024-12-13T13:29:57.913285816Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:29:57.913436 containerd[1502]: time="2024-12-13T13:29:57.913422733Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:29:57.913554 containerd[1502]: time="2024-12-13T13:29:57.913540374Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:29:57.929030 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:29:57.935668 containerd[1502]: time="2024-12-13T13:29:57.935618688Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:29:57.935733 containerd[1502]: time="2024-12-13T13:29:57.935691355Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:29:57.935733 containerd[1502]: time="2024-12-13T13:29:57.935709278Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:29:57.935733 containerd[1502]: time="2024-12-13T13:29:57.935723806Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:29:57.935849 containerd[1502]: time="2024-12-13T13:29:57.935737792Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:29:57.936033 containerd[1502]: time="2024-12-13T13:29:57.935924031Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:29:57.936225 containerd[1502]: time="2024-12-13T13:29:57.936185792Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936422165Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936444166Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936460247Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936485935Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936501935Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936514438Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936530498Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936545256Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936556417Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936568349Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936579059Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936600359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936614957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.936863 containerd[1502]: time="2024-12-13T13:29:57.936630396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936642929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936655673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936669479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936681391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936694586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936707460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936723070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936736565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936748267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936762163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936776640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936796457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936816855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937279 containerd[1502]: time="2024-12-13T13:29:57.936830792Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:29:57.937557 containerd[1502]: time="2024-12-13T13:29:57.936892918Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:29:57.937557 containerd[1502]: time="2024-12-13T13:29:57.936910972Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:29:57.937557 containerd[1502]: time="2024-12-13T13:29:57.936922153Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:29:57.937557 containerd[1502]: time="2024-12-13T13:29:57.936943463Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:29:57.937557 containerd[1502]: time="2024-12-13T13:29:57.936953371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:29:57.937557 containerd[1502]: time="2024-12-13T13:29:57.936989399Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:29:57.937557 containerd[1502]: time="2024-12-13T13:29:57.936999909Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:29:57.937557 containerd[1502]: time="2024-12-13T13:29:57.937010168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 13:29:57.937711 containerd[1502]: time="2024-12-13T13:29:57.937285484Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:29:57.937711 containerd[1502]: time="2024-12-13T13:29:57.937325399Z" level=info msg="Connect containerd service" Dec 13 13:29:57.937711 containerd[1502]: time="2024-12-13T13:29:57.937355315Z" level=info msg="using legacy CRI server" Dec 13 13:29:57.937711 containerd[1502]: time="2024-12-13T13:29:57.937361497Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:29:57.937711 containerd[1502]: time="2024-12-13T13:29:57.937462997Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:29:57.938176 containerd[1502]: time="2024-12-13T13:29:57.938152530Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:29:57.938412 
containerd[1502]: time="2024-12-13T13:29:57.938331005Z" level=info msg="Start subscribing containerd event" Dec 13 13:29:57.938547 containerd[1502]: time="2024-12-13T13:29:57.938514449Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:29:57.938588 containerd[1502]: time="2024-12-13T13:29:57.938572277Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:29:57.938614 containerd[1502]: time="2024-12-13T13:29:57.938516192Z" level=info msg="Start recovering state" Dec 13 13:29:57.939782 containerd[1502]: time="2024-12-13T13:29:57.938652658Z" level=info msg="Start event monitor" Dec 13 13:29:57.939782 containerd[1502]: time="2024-12-13T13:29:57.938676813Z" level=info msg="Start snapshots syncer" Dec 13 13:29:57.939782 containerd[1502]: time="2024-12-13T13:29:57.938686191Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:29:57.939782 containerd[1502]: time="2024-12-13T13:29:57.938694316Z" level=info msg="Start streaming server" Dec 13 13:29:57.939782 containerd[1502]: time="2024-12-13T13:29:57.938929427Z" level=info msg="containerd successfully booted in 0.058904s" Dec 13 13:29:57.938824 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:29:57.953807 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:29:57.968755 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:29:57.976097 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:29:57.976307 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:29:57.980111 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:29:58.012661 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:29:58.020952 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:29:58.023395 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:29:58.024690 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:29:58.114076 tar[1490]: linux-amd64/LICENSE Dec 13 13:29:58.114188 tar[1490]: linux-amd64/README.md Dec 13 13:29:58.132526 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:29:58.135602 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:29:58.149811 systemd[1]: Started sshd@0-10.0.0.121:22-10.0.0.1:46986.service - OpenSSH per-connection server daemon (10.0.0.1:46986). Dec 13 13:29:58.191537 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 46986 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:29:58.193352 sshd-session[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:58.202338 systemd-logind[1477]: New session 1 of user core. Dec 13 13:29:58.203723 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:29:58.223723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:29:58.236039 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:29:58.248753 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:29:58.252741 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:29:58.355715 systemd[1565]: Queued start job for default target default.target. Dec 13 13:29:58.366853 systemd[1565]: Created slice app.slice - User Application Slice. 
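[Annotation] The CRI plugin error above ("no network config found in /etc/cni/net.d") means pod sandboxes cannot get networking yet; containerd expects a CNI configuration file in that directory, normally installed later by a network add-on (flannel, calico, etc.). A minimal bridge config of the expected shape, with illustrative values not taken from this host:

    mkdir -p /etc/cni/net.d
    cat >/etc/cni/net.d/10-bridge.conf <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "mynet",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    }
    EOF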
Dec 13 13:29:58.366882 systemd[1565]: Reached target paths.target - Paths. Dec 13 13:29:58.366896 systemd[1565]: Reached target timers.target - Timers. Dec 13 13:29:58.368601 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:29:58.380797 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:29:58.380934 systemd[1565]: Reached target sockets.target - Sockets. Dec 13 13:29:58.380954 systemd[1565]: Reached target basic.target - Basic System. Dec 13 13:29:58.380992 systemd[1565]: Reached target default.target - Main User Target. Dec 13 13:29:58.381026 systemd[1565]: Startup finished in 121ms. Dec 13 13:29:58.381545 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:29:58.762714 systemd-networkd[1417]: eth0: Gained IPv6LL Dec 13 13:29:58.764151 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:29:58.767458 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:29:58.769562 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:29:58.777663 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 13:29:58.780168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:29:58.782298 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:29:58.802398 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 13:29:58.802681 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 13:29:58.804611 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:29:58.806276 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:29:58.844012 systemd[1]: Started sshd@1-10.0.0.121:22-10.0.0.1:46990.service - OpenSSH per-connection server daemon (10.0.0.1:46990). Dec 13 13:29:58.882619 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 46990 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:29:58.883641 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:58.888398 systemd-logind[1477]: New session 2 of user core. Dec 13 13:29:58.901588 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:29:58.957576 sshd[1595]: Connection closed by 10.0.0.1 port 46990 Dec 13 13:29:58.957924 sshd-session[1593]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:58.972708 systemd[1]: sshd@1-10.0.0.121:22-10.0.0.1:46990.service: Deactivated successfully. Dec 13 13:29:58.974823 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:29:58.976677 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:29:58.986875 systemd[1]: Started sshd@2-10.0.0.121:22-10.0.0.1:47004.service - OpenSSH per-connection server daemon (10.0.0.1:47004). Dec 13 13:29:58.989304 systemd-logind[1477]: Removed session 2. Dec 13 13:29:59.021936 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 47004 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:29:59.023411 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:29:59.027922 systemd-logind[1477]: New session 3 of user core. Dec 13 13:29:59.039579 systemd[1]: Started session-3.scope - Session 3 of User core. 
Dec 13 13:29:59.094685 sshd[1602]: Connection closed by 10.0.0.1 port 47004 Dec 13 13:29:59.094984 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Dec 13 13:29:59.098945 systemd[1]: sshd@2-10.0.0.121:22-10.0.0.1:47004.service: Deactivated successfully. Dec 13 13:29:59.100795 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:29:59.101378 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:29:59.102355 systemd-logind[1477]: Removed session 3. Dec 13 13:29:59.400823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:29:59.402376 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:29:59.404780 systemd[1]: Startup finished in 758ms (kernel) + 5.759s (initrd) + 3.938s (userspace) = 10.456s. Dec 13 13:29:59.405517 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:29:59.414015 agetty[1556]: failed to open credentials directory Dec 13 13:29:59.414083 agetty[1555]: failed to open credentials directory Dec 13 13:29:59.833569 kubelet[1611]: E1213 13:29:59.833416 1611 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:29:59.837671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:29:59.837901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:30:09.109436 systemd[1]: Started sshd@3-10.0.0.121:22-10.0.0.1:39192.service - OpenSSH per-connection server daemon (10.0.0.1:39192). Dec 13 13:30:09.145487 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 39192 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:30:09.146830 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:09.150837 systemd-logind[1477]: New session 4 of user core. Dec 13 13:30:09.161684 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:30:09.213595 sshd[1627]: Connection closed by 10.0.0.1 port 39192 Dec 13 13:30:09.213942 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:09.225987 systemd[1]: sshd@3-10.0.0.121:22-10.0.0.1:39192.service: Deactivated successfully. Dec 13 13:30:09.227848 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:30:09.229266 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:30:09.230572 systemd[1]: Started sshd@4-10.0.0.121:22-10.0.0.1:39194.service - OpenSSH per-connection server daemon (10.0.0.1:39194). Dec 13 13:30:09.231293 systemd-logind[1477]: Removed session 4. Dec 13 13:30:09.266741 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 39194 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:30:09.268063 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:09.272051 systemd-logind[1477]: New session 5 of user core. Dec 13 13:30:09.281595 systemd[1]: Started session-5.scope - Session 5 of User core. 
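[Annotation] The kubelet failure above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is typically written by kubeadm during init/join, and until it exists the unit exits with status 1 and systemd restarts it (the restart counter climbs to 1 and then 2 later in this log). A hedged way to observe the loop:

    ls -l /var/lib/kubelet/config.yaml    # ENOENT until a bootstrapper writes it
    systemctl show kubelet -p NRestarts   # the restart counter systemd reports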
Dec 13 13:30:09.332091 sshd[1634]: Connection closed by 10.0.0.1 port 39194 Dec 13 13:30:09.332620 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:09.344953 systemd[1]: sshd@4-10.0.0.121:22-10.0.0.1:39194.service: Deactivated successfully. Dec 13 13:30:09.346442 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:30:09.348094 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:30:09.349333 systemd[1]: Started sshd@5-10.0.0.121:22-10.0.0.1:39206.service - OpenSSH per-connection server daemon (10.0.0.1:39206). Dec 13 13:30:09.350118 systemd-logind[1477]: Removed session 5. Dec 13 13:30:09.398078 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 39206 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:30:09.399520 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:09.403251 systemd-logind[1477]: New session 6 of user core. Dec 13 13:30:09.420585 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:30:09.474876 sshd[1641]: Connection closed by 10.0.0.1 port 39206 Dec 13 13:30:09.475297 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:09.491523 systemd[1]: sshd@5-10.0.0.121:22-10.0.0.1:39206.service: Deactivated successfully. Dec 13 13:30:09.493317 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:30:09.494929 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:30:09.496198 systemd[1]: Started sshd@6-10.0.0.121:22-10.0.0.1:39216.service - OpenSSH per-connection server daemon (10.0.0.1:39216). Dec 13 13:30:09.496965 systemd-logind[1477]: Removed session 6. Dec 13 13:30:09.551026 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 39216 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:30:09.552605 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:09.556603 systemd-logind[1477]: New session 7 of user core. Dec 13 13:30:09.566682 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:30:09.626611 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:30:09.626989 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:30:09.646951 sudo[1649]: pam_unix(sudo:session): session closed for user root Dec 13 13:30:09.648645 sshd[1648]: Connection closed by 10.0.0.1 port 39216 Dec 13 13:30:09.649210 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:09.660451 systemd[1]: sshd@6-10.0.0.121:22-10.0.0.1:39216.service: Deactivated successfully. Dec 13 13:30:09.662185 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:30:09.663815 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:30:09.665183 systemd[1]: Started sshd@7-10.0.0.121:22-10.0.0.1:39228.service - OpenSSH per-connection server daemon (10.0.0.1:39228). Dec 13 13:30:09.665929 systemd-logind[1477]: Removed session 7. Dec 13 13:30:09.702677 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 39228 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:30:09.704261 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:09.708307 systemd-logind[1477]: New session 8 of user core. 
Dec 13 13:30:09.717593 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:30:09.772381 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:30:09.772735 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:30:09.776284 sudo[1658]: pam_unix(sudo:session): session closed for user root Dec 13 13:30:09.782467 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:30:09.782859 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:30:09.802738 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:30:09.831598 augenrules[1680]: No rules Dec 13 13:30:09.833332 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:30:09.833571 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:30:09.834697 sudo[1657]: pam_unix(sudo:session): session closed for user root Dec 13 13:30:09.836072 sshd[1656]: Connection closed by 10.0.0.1 port 39228 Dec 13 13:30:09.836418 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:09.846219 systemd[1]: sshd@7-10.0.0.121:22-10.0.0.1:39228.service: Deactivated successfully. Dec 13 13:30:09.847921 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:30:09.848779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:30:09.849190 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:30:09.856633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:30:09.857724 systemd[1]: Started sshd@8-10.0.0.121:22-10.0.0.1:39242.service - OpenSSH per-connection server daemon (10.0.0.1:39242). Dec 13 13:30:09.858697 systemd-logind[1477]: Removed session 8. Dec 13 13:30:09.894129 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 39242 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:30:09.895588 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:09.900020 systemd-logind[1477]: New session 9 of user core. Dec 13 13:30:09.914695 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:30:09.968398 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:30:09.968791 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:30:10.011604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:30:10.016743 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:30:10.062773 kubelet[1704]: E1213 13:30:10.062725 1704 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:30:10.069726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:30:10.069922 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:30:10.261790 systemd[1]: Starting docker.service - Docker Application Container Engine... 
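[Annotation] The sudo invocations above delete the two files under /etc/audit/rules.d and restart audit-rules, after which augenrules reports "No rules": augenrules concatenates /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and loads the result, so an emptied directory yields an empty ruleset. Sketch of the same cycle:

    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load    # rebuild audit.rules from rules.d and load it
    auditctl -l          # prints "No rules" for the empty set, matching the log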
Dec 13 13:30:10.261863 (dockerd)[1729]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:30:10.506753 dockerd[1729]: time="2024-12-13T13:30:10.506693020Z" level=info msg="Starting up" Dec 13 13:30:10.839879 systemd[1]: var-lib-docker-metacopy\x2dcheck631206060-merged.mount: Deactivated successfully. Dec 13 13:30:10.865723 dockerd[1729]: time="2024-12-13T13:30:10.865685800Z" level=info msg="Loading containers: start." Dec 13 13:30:11.039505 kernel: Initializing XFRM netlink socket Dec 13 13:30:11.118591 systemd-networkd[1417]: docker0: Link UP Dec 13 13:30:11.150893 dockerd[1729]: time="2024-12-13T13:30:11.150840577Z" level=info msg="Loading containers: done." Dec 13 13:30:11.165400 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1001793724-merged.mount: Deactivated successfully. Dec 13 13:30:11.166929 dockerd[1729]: time="2024-12-13T13:30:11.166879055Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:30:11.167010 dockerd[1729]: time="2024-12-13T13:30:11.166985053Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:30:11.167135 dockerd[1729]: time="2024-12-13T13:30:11.167105750Z" level=info msg="Daemon has completed initialization" Dec 13 13:30:11.205531 dockerd[1729]: time="2024-12-13T13:30:11.205438367Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:30:11.205683 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:30:11.894774 containerd[1502]: time="2024-12-13T13:30:11.894738962Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 13:30:12.537054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228053288.mount: Deactivated successfully. 
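[Annotation] The PullImage request above and the pulls that follow are served by containerd's CRI plugin over /run/containerd/containerd.sock, not by Docker. With crictl installed, which this log does not show, the same pulls could be driven by hand:

    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    crictl pull registry.k8s.io/kube-apiserver:v1.30.8
    crictl images    # repo tags and digests matching the ImageCreate events below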
Dec 13 13:30:13.981243 containerd[1502]: time="2024-12-13T13:30:13.981186806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:13.981899 containerd[1502]: time="2024-12-13T13:30:13.981848026Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 13:30:13.984130 containerd[1502]: time="2024-12-13T13:30:13.983880077Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:13.987223 containerd[1502]: time="2024-12-13T13:30:13.987174335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:13.988189 containerd[1502]: time="2024-12-13T13:30:13.988161927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.093388911s" Dec 13 13:30:13.988245 containerd[1502]: time="2024-12-13T13:30:13.988191372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 13:30:14.009083 containerd[1502]: time="2024-12-13T13:30:14.009049829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 13:30:15.844351 containerd[1502]: time="2024-12-13T13:30:15.844292855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:15.845113 containerd[1502]: time="2024-12-13T13:30:15.845072838Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 13:30:15.846432 containerd[1502]: time="2024-12-13T13:30:15.846389988Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:15.849167 containerd[1502]: time="2024-12-13T13:30:15.849140155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:15.850303 containerd[1502]: time="2024-12-13T13:30:15.850263933Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.841181753s" Dec 13 13:30:15.850341 containerd[1502]: time="2024-12-13T13:30:15.850305451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 13:30:15.874211 
containerd[1502]: time="2024-12-13T13:30:15.874174263Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 13:30:16.856934 containerd[1502]: time="2024-12-13T13:30:16.856874557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:16.857587 containerd[1502]: time="2024-12-13T13:30:16.857557588Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 13:30:16.858889 containerd[1502]: time="2024-12-13T13:30:16.858843419Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:16.861487 containerd[1502]: time="2024-12-13T13:30:16.861426995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:16.862663 containerd[1502]: time="2024-12-13T13:30:16.862630742Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 988.419289ms" Dec 13 13:30:16.862709 containerd[1502]: time="2024-12-13T13:30:16.862663834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 13:30:16.887709 containerd[1502]: time="2024-12-13T13:30:16.887675760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 13:30:17.878979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426483939.mount: Deactivated successfully. 
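[Annotation] The pull timings above imply a rough effective throughput (reported stored image size divided by reported wall time; approximate, since the bytes actually downloaded are compressed and differ from the stored size):

    awk 'BEGIN {
      printf "kube-apiserver          %.1f MB/s\n", 32672442/2.093388911/1e6
      printf "kube-controller-manager %.1f MB/s\n", 31051521/1.841181753/1e6
      printf "kube-scheduler          %.1f MB/s\n", 19228165/0.988419289/1e6
    }'   # ~15.6, ~16.9, ~19.5 MB/s respectively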
Dec 13 13:30:18.595433 containerd[1502]: time="2024-12-13T13:30:18.595347084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:18.637848 containerd[1502]: time="2024-12-13T13:30:18.637764251Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 13:30:18.652991 containerd[1502]: time="2024-12-13T13:30:18.652962323Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:18.655183 containerd[1502]: time="2024-12-13T13:30:18.655155746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:18.655731 containerd[1502]: time="2024-12-13T13:30:18.655692703Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.767979182s" Dec 13 13:30:18.655731 containerd[1502]: time="2024-12-13T13:30:18.655721537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 13:30:18.680101 containerd[1502]: time="2024-12-13T13:30:18.680055402Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:30:19.264010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929285318.mount: Deactivated successfully. 
Dec 13 13:30:19.946878 containerd[1502]: time="2024-12-13T13:30:19.946821280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:19.947693 containerd[1502]: time="2024-12-13T13:30:19.947639945Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 13:30:19.948829 containerd[1502]: time="2024-12-13T13:30:19.948796104Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:19.952078 containerd[1502]: time="2024-12-13T13:30:19.952026742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:19.953345 containerd[1502]: time="2024-12-13T13:30:19.953299669Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.273198702s" Dec 13 13:30:19.953345 containerd[1502]: time="2024-12-13T13:30:19.953338202Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 13:30:19.977596 containerd[1502]: time="2024-12-13T13:30:19.977535681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:30:20.112787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:30:20.129842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:30:20.297113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:30:20.304267 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:30:20.528393 kubelet[2089]: E1213 13:30:20.528323 2089 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:30:20.532971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:30:20.533180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:30:20.692850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769004421.mount: Deactivated successfully. 
Dec 13 13:30:20.697285 containerd[1502]: time="2024-12-13T13:30:20.697225516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:20.697920 containerd[1502]: time="2024-12-13T13:30:20.697870445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 13:30:20.698954 containerd[1502]: time="2024-12-13T13:30:20.698920134Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:20.701011 containerd[1502]: time="2024-12-13T13:30:20.700981730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:20.701715 containerd[1502]: time="2024-12-13T13:30:20.701688375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 724.117369ms" Dec 13 13:30:20.701767 containerd[1502]: time="2024-12-13T13:30:20.701720245Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 13:30:20.725984 containerd[1502]: time="2024-12-13T13:30:20.725934916Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 13:30:21.297072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116165342.mount: Deactivated successfully. Dec 13 13:30:23.265428 containerd[1502]: time="2024-12-13T13:30:23.265360065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:23.266799 containerd[1502]: time="2024-12-13T13:30:23.266732409Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 13:30:23.268675 containerd[1502]: time="2024-12-13T13:30:23.268645556Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:23.271935 containerd[1502]: time="2024-12-13T13:30:23.271899769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:23.273377 containerd[1502]: time="2024-12-13T13:30:23.273342475Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.547367703s" Dec 13 13:30:23.273418 containerd[1502]: time="2024-12-13T13:30:23.273377210Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 13:30:26.042202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
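[Annotation] The explicit stop above, followed below by a daemon-reload requested from session-9 and a restart that brings kubelet up with real flags and a bootstrap kubeconfig, is consistent with an external bootstrapper dropping configuration in place and bouncing the unit (for example kubeadm's kubelet-start phase; an assumption, since the log never names it). The generic shape:

    systemctl stop kubelet
    # ...write /var/lib/kubelet/config.yaml plus a unit drop-in with the flags...
    systemctl daemon-reload
    systemctl restart kubelet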
Dec 13 13:30:26.049683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:30:26.067160 systemd[1]: Reloading requested from client PID 2237 ('systemctl') (unit session-9.scope)... Dec 13 13:30:26.067173 systemd[1]: Reloading... Dec 13 13:30:26.144760 zram_generator::config[2277]: No configuration found. Dec 13 13:30:26.383877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:30:26.460717 systemd[1]: Reloading finished in 393 ms. Dec 13 13:30:26.509094 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 13:30:26.509201 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 13:30:26.509467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:30:26.511971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:30:26.656213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:30:26.660583 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:30:26.699859 kubelet[2325]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:30:26.699859 kubelet[2325]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:30:26.699859 kubelet[2325]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:30:26.700768 kubelet[2325]: I1213 13:30:26.700716 2325 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:30:26.989965 kubelet[2325]: I1213 13:30:26.989867 2325 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:30:26.989965 kubelet[2325]: I1213 13:30:26.989894 2325 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:30:26.990106 kubelet[2325]: I1213 13:30:26.990088 2325 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:30:27.003691 kubelet[2325]: I1213 13:30:27.003641 2325 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:30:27.004127 kubelet[2325]: E1213 13:30:27.004098 2325 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.015519 kubelet[2325]: I1213 13:30:27.015494 2325 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:30:27.016510 kubelet[2325]: I1213 13:30:27.016458 2325 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:30:27.016662 kubelet[2325]: I1213 13:30:27.016500 2325 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:30:27.017040 kubelet[2325]: I1213 13:30:27.017018 2325 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:30:27.017040 kubelet[2325]: I1213 13:30:27.017033 2325 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:30:27.017187 kubelet[2325]: I1213 13:30:27.017159 2325 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:30:27.017766 kubelet[2325]: I1213 13:30:27.017743 2325 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:30:27.017766 kubelet[2325]: I1213 13:30:27.017759 2325 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:30:27.017811 kubelet[2325]: I1213 13:30:27.017779 2325 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:30:27.017811 kubelet[2325]: I1213 13:30:27.017797 2325 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:30:27.021215 kubelet[2325]: W1213 13:30:27.021098 2325 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.021215 kubelet[2325]: E1213 13:30:27.021150 2325 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.021215 kubelet[2325]: W1213 13:30:27.021143 2325 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.021215 kubelet[2325]: E1213 13:30:27.021195 2325 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.022186 kubelet[2325]: I1213 13:30:27.022144 2325 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:30:27.023313 kubelet[2325]: I1213 13:30:27.023295 2325 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:30:27.023358 kubelet[2325]: W1213 13:30:27.023349 2325 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:30:27.024055 kubelet[2325]: I1213 13:30:27.023958 2325 server.go:1264] "Started kubelet" Dec 13 13:30:27.025528 kubelet[2325]: I1213 13:30:27.025496 2325 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:30:27.026406 kubelet[2325]: I1213 13:30:27.026059 2325 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:30:27.027026 kubelet[2325]: I1213 13:30:27.026742 2325 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:30:27.028381 kubelet[2325]: I1213 13:30:27.028311 2325 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:30:27.029064 kubelet[2325]: I1213 13:30:27.028553 2325 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:30:27.029064 kubelet[2325]: I1213 13:30:27.028646 2325 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:30:27.029064 kubelet[2325]: I1213 13:30:27.028756 2325 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:30:27.029064 kubelet[2325]: I1213 13:30:27.028829 2325 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:30:27.029159 kubelet[2325]: W1213 13:30:27.029104 2325 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.029159 kubelet[2325]: E1213 13:30:27.029134 2325 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.029896 kubelet[2325]: E1213 13:30:27.029760 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="200ms" Dec 13 13:30:27.030555 kubelet[2325]: I1213 13:30:27.030139 2325 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:30:27.030555 kubelet[2325]: I1213 13:30:27.030225 2325 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory 
Dec 13 13:30:27.030555 kubelet[2325]: E1213 13:30:27.030249 2325 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.121:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.121:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bfb1d3f7036b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:30:27.023938411 +0000 UTC m=+0.359311729,LastTimestamp:2024-12-13 13:30:27.023938411 +0000 UTC m=+0.359311729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:30:27.031642 kubelet[2325]: E1213 13:30:27.031626 2325 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:30:27.032312 kubelet[2325]: I1213 13:30:27.032285 2325 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:30:27.045117 kubelet[2325]: I1213 13:30:27.045100 2325 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:30:27.045117 kubelet[2325]: I1213 13:30:27.045114 2325 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:30:27.045214 kubelet[2325]: I1213 13:30:27.045128 2325 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:30:27.045966 kubelet[2325]: I1213 13:30:27.045941 2325 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:30:27.047398 kubelet[2325]: I1213 13:30:27.047373 2325 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:30:27.047398 kubelet[2325]: I1213 13:30:27.047395 2325 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:30:27.047548 kubelet[2325]: I1213 13:30:27.047409 2325 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:30:27.047548 kubelet[2325]: E1213 13:30:27.047441 2325 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:30:27.130723 kubelet[2325]: I1213 13:30:27.130680 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:30:27.131058 kubelet[2325]: E1213 13:30:27.131025 2325 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Dec 13 13:30:27.148204 kubelet[2325]: E1213 13:30:27.148170 2325 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:30:27.230777 kubelet[2325]: E1213 13:30:27.230724 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="400ms" Dec 13 13:30:27.332211 kubelet[2325]: I1213 13:30:27.332129 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:30:27.332428 kubelet[2325]: E1213 13:30:27.332390 2325 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Dec 13 13:30:27.348511 kubelet[2325]: E1213 13:30:27.348467 2325 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:30:27.360213 kubelet[2325]: W1213 13:30:27.360142 2325 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.360259 kubelet[2325]: E1213 13:30:27.360213 2325 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.361848 kubelet[2325]: I1213 13:30:27.361820 2325 policy_none.go:49] "None policy: Start" Dec 13 13:30:27.362386 kubelet[2325]: I1213 13:30:27.362368 2325 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:30:27.362420 kubelet[2325]: I1213 13:30:27.362395 2325 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:30:27.368963 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:30:27.386000 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:30:27.388651 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 13:30:27.404305 kubelet[2325]: I1213 13:30:27.404279 2325 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:30:27.404518 kubelet[2325]: I1213 13:30:27.404466 2325 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:30:27.404608 kubelet[2325]: I1213 13:30:27.404586 2325 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:30:27.405487 kubelet[2325]: E1213 13:30:27.405436 2325 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:30:27.631696 kubelet[2325]: E1213 13:30:27.631669 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="800ms" Dec 13 13:30:27.737985 kubelet[2325]: I1213 13:30:27.737942 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:30:27.738429 kubelet[2325]: E1213 13:30:27.738177 2325 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Dec 13 13:30:27.749392 kubelet[2325]: I1213 13:30:27.749350 2325 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:30:27.750191 kubelet[2325]: I1213 13:30:27.750168 2325 topology_manager.go:215] "Topology Admit Handler" podUID="5400f2e7d5c1a34e6d7bab38788b85fd" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:30:27.750995 kubelet[2325]: I1213 13:30:27.750945 2325 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:30:27.756328 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Dec 13 13:30:27.772961 systemd[1]: Created slice kubepods-burstable-pod5400f2e7d5c1a34e6d7bab38788b85fd.slice - libcontainer container kubepods-burstable-pod5400f2e7d5c1a34e6d7bab38788b85fd.slice. Dec 13 13:30:27.776957 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. 
Dec 13 13:30:27.835520 kubelet[2325]: I1213 13:30:27.835466 2325 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:30:27.835520 kubelet[2325]: I1213 13:30:27.835513 2325 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5400f2e7d5c1a34e6d7bab38788b85fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5400f2e7d5c1a34e6d7bab38788b85fd\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:30:27.835692 kubelet[2325]: I1213 13:30:27.835542 2325 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:30:27.835692 kubelet[2325]: I1213 13:30:27.835562 2325 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:30:27.835692 kubelet[2325]: I1213 13:30:27.835577 2325 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:30:27.835692 kubelet[2325]: I1213 13:30:27.835597 2325 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:30:27.835692 kubelet[2325]: I1213 13:30:27.835615 2325 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:30:27.835808 kubelet[2325]: I1213 13:30:27.835647 2325 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5400f2e7d5c1a34e6d7bab38788b85fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5400f2e7d5c1a34e6d7bab38788b85fd\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:30:27.835808 kubelet[2325]: I1213 13:30:27.835686 2325 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5400f2e7d5c1a34e6d7bab38788b85fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5400f2e7d5c1a34e6d7bab38788b85fd\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 13:30:27.973526 kubelet[2325]: W1213 13:30:27.973339 2325 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:27.973526 kubelet[2325]: E1213 13:30:27.973429 2325 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:28.072027 kubelet[2325]: E1213 13:30:28.071986 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:28.072532 containerd[1502]: time="2024-12-13T13:30:28.072496351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 13:30:28.075886 kubelet[2325]: E1213 13:30:28.075854 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:28.076417 containerd[1502]: time="2024-12-13T13:30:28.076367291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5400f2e7d5c1a34e6d7bab38788b85fd,Namespace:kube-system,Attempt:0,}" Dec 13 13:30:28.078580 kubelet[2325]: E1213 13:30:28.078558 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:28.078824 containerd[1502]: time="2024-12-13T13:30:28.078798210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 13:30:28.381635 kubelet[2325]: W1213 13:30:28.381582 2325 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:28.381807 kubelet[2325]: E1213 13:30:28.381637 2325 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:28.418225 kubelet[2325]: W1213 13:30:28.418177 2325 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:28.418225 kubelet[2325]: E1213 13:30:28.418226 2325 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:28.432953 kubelet[2325]: E1213 13:30:28.432902 2325 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="1.6s" Dec 13 13:30:28.539715 kubelet[2325]: I1213 13:30:28.539665 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:30:28.539938 kubelet[2325]: E1213 13:30:28.539905 2325 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost" Dec 13 13:30:28.609423 kubelet[2325]: W1213 13:30:28.609358 2325 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:28.609423 kubelet[2325]: E1213 13:30:28.609421 2325 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:29.031690 kubelet[2325]: E1213 13:30:29.031651 2325 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.121:6443: connect: connection refused Dec 13 13:30:29.037098 kubelet[2325]: E1213 13:30:29.037006 2325 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.121:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.121:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bfb1d3f7036b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:30:27.023938411 +0000 UTC m=+0.359311729,LastTimestamp:2024-12-13 13:30:27.023938411 +0000 UTC m=+0.359311729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:30:29.107336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846143725.mount: Deactivated successfully. 
Dec 13 13:30:29.113760 containerd[1502]: time="2024-12-13T13:30:29.113690562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:30:29.115636 containerd[1502]: time="2024-12-13T13:30:29.115601706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 13:30:29.118489 containerd[1502]: time="2024-12-13T13:30:29.118454596Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:30:29.119809 containerd[1502]: time="2024-12-13T13:30:29.119767819Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:30:29.121183 containerd[1502]: time="2024-12-13T13:30:29.121131757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:30:29.122113 containerd[1502]: time="2024-12-13T13:30:29.122079595Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:30:29.122934 containerd[1502]: time="2024-12-13T13:30:29.122902067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:30:29.123875 containerd[1502]: time="2024-12-13T13:30:29.123842761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:30:29.124530 containerd[1502]: time="2024-12-13T13:30:29.124507217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.051921949s" Dec 13 13:30:29.128210 containerd[1502]: time="2024-12-13T13:30:29.128176388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.049319689s" Dec 13 13:30:29.128923 containerd[1502]: time="2024-12-13T13:30:29.128889896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.052436363s" Dec 13 13:30:29.247943 containerd[1502]: time="2024-12-13T13:30:29.247853085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:30:29.248290 containerd[1502]: time="2024-12-13T13:30:29.247960707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:30:29.248290 containerd[1502]: time="2024-12-13T13:30:29.247997656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:29.248290 containerd[1502]: time="2024-12-13T13:30:29.248167274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:29.249116 containerd[1502]: time="2024-12-13T13:30:29.248952216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:30:29.249116 containerd[1502]: time="2024-12-13T13:30:29.249074706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:30:29.249116 containerd[1502]: time="2024-12-13T13:30:29.249087189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:29.250067 containerd[1502]: time="2024-12-13T13:30:29.247827537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:30:29.250067 containerd[1502]: time="2024-12-13T13:30:29.250003357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:30:29.250067 containerd[1502]: time="2024-12-13T13:30:29.250015751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:29.250067 containerd[1502]: time="2024-12-13T13:30:29.249186115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:29.251343 containerd[1502]: time="2024-12-13T13:30:29.250078358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:29.268610 systemd[1]: Started cri-containerd-02a9a6f3c5ce026afd116279117dc064518bf7eb2a8a5cdc2a79743833b483a1.scope - libcontainer container 02a9a6f3c5ce026afd116279117dc064518bf7eb2a8a5cdc2a79743833b483a1. Dec 13 13:30:29.272857 systemd[1]: Started cri-containerd-4f1aec12e553a3968f17ef8d4a4120f547400272279b342f5c4bf6c14de0a65f.scope - libcontainer container 4f1aec12e553a3968f17ef8d4a4120f547400272279b342f5c4bf6c14de0a65f. Dec 13 13:30:29.274249 systemd[1]: Started cri-containerd-f24b62b7738619ee28513fff7f8c3b6c7239fa90bf58e54d1d5b7e08d79371aa.scope - libcontainer container f24b62b7738619ee28513fff7f8c3b6c7239fa90bf58e54d1d5b7e08d79371aa. 
Dec 13 13:30:29.314503 containerd[1502]: time="2024-12-13T13:30:29.314174471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f1aec12e553a3968f17ef8d4a4120f547400272279b342f5c4bf6c14de0a65f\"" Dec 13 13:30:29.315602 kubelet[2325]: E1213 13:30:29.315570 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:29.317767 containerd[1502]: time="2024-12-13T13:30:29.317729137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"02a9a6f3c5ce026afd116279117dc064518bf7eb2a8a5cdc2a79743833b483a1\"" Dec 13 13:30:29.319023 kubelet[2325]: E1213 13:30:29.318993 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:29.319268 containerd[1502]: time="2024-12-13T13:30:29.319197561Z" level=info msg="CreateContainer within sandbox \"4f1aec12e553a3968f17ef8d4a4120f547400272279b342f5c4bf6c14de0a65f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:30:29.320805 containerd[1502]: time="2024-12-13T13:30:29.320778767Z" level=info msg="CreateContainer within sandbox \"02a9a6f3c5ce026afd116279117dc064518bf7eb2a8a5cdc2a79743833b483a1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:30:29.322708 containerd[1502]: time="2024-12-13T13:30:29.322658472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5400f2e7d5c1a34e6d7bab38788b85fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f24b62b7738619ee28513fff7f8c3b6c7239fa90bf58e54d1d5b7e08d79371aa\"" Dec 13 13:30:29.323171 kubelet[2325]: E1213 13:30:29.323151 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:29.325370 containerd[1502]: time="2024-12-13T13:30:29.325340061Z" level=info msg="CreateContainer within sandbox \"f24b62b7738619ee28513fff7f8c3b6c7239fa90bf58e54d1d5b7e08d79371aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:30:29.341284 containerd[1502]: time="2024-12-13T13:30:29.341210804Z" level=info msg="CreateContainer within sandbox \"4f1aec12e553a3968f17ef8d4a4120f547400272279b342f5c4bf6c14de0a65f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed9bb821b42c1c3d84a4d64bbe366ae985d0d15dc53fc4ebc0692be3585a914f\"" Dec 13 13:30:29.341710 containerd[1502]: time="2024-12-13T13:30:29.341670015Z" level=info msg="StartContainer for \"ed9bb821b42c1c3d84a4d64bbe366ae985d0d15dc53fc4ebc0692be3585a914f\"" Dec 13 13:30:29.351058 containerd[1502]: time="2024-12-13T13:30:29.351025149Z" level=info msg="CreateContainer within sandbox \"02a9a6f3c5ce026afd116279117dc064518bf7eb2a8a5cdc2a79743833b483a1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67df5b04daa98ea4d259c9345d369d8ae166b72399b6d213b7a1edffbbba9e69\"" Dec 13 13:30:29.351521 containerd[1502]: time="2024-12-13T13:30:29.351453272Z" level=info msg="StartContainer for \"67df5b04daa98ea4d259c9345d369d8ae166b72399b6d213b7a1edffbbba9e69\"" Dec 13 
13:30:29.355173 containerd[1502]: time="2024-12-13T13:30:29.355138734Z" level=info msg="CreateContainer within sandbox \"f24b62b7738619ee28513fff7f8c3b6c7239fa90bf58e54d1d5b7e08d79371aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8424456bc06d43ba7efbf4ab7596b128c19475a524481409434d8ce9bc1808d8\"" Dec 13 13:30:29.356698 containerd[1502]: time="2024-12-13T13:30:29.355679929Z" level=info msg="StartContainer for \"8424456bc06d43ba7efbf4ab7596b128c19475a524481409434d8ce9bc1808d8\"" Dec 13 13:30:29.370612 systemd[1]: Started cri-containerd-ed9bb821b42c1c3d84a4d64bbe366ae985d0d15dc53fc4ebc0692be3585a914f.scope - libcontainer container ed9bb821b42c1c3d84a4d64bbe366ae985d0d15dc53fc4ebc0692be3585a914f. Dec 13 13:30:29.378433 systemd[1]: Started cri-containerd-67df5b04daa98ea4d259c9345d369d8ae166b72399b6d213b7a1edffbbba9e69.scope - libcontainer container 67df5b04daa98ea4d259c9345d369d8ae166b72399b6d213b7a1edffbbba9e69. Dec 13 13:30:29.382443 systemd[1]: Started cri-containerd-8424456bc06d43ba7efbf4ab7596b128c19475a524481409434d8ce9bc1808d8.scope - libcontainer container 8424456bc06d43ba7efbf4ab7596b128c19475a524481409434d8ce9bc1808d8. Dec 13 13:30:29.414438 containerd[1502]: time="2024-12-13T13:30:29.414289513Z" level=info msg="StartContainer for \"ed9bb821b42c1c3d84a4d64bbe366ae985d0d15dc53fc4ebc0692be3585a914f\" returns successfully" Dec 13 13:30:29.427312 containerd[1502]: time="2024-12-13T13:30:29.427266339Z" level=info msg="StartContainer for \"8424456bc06d43ba7efbf4ab7596b128c19475a524481409434d8ce9bc1808d8\" returns successfully" Dec 13 13:30:29.431609 containerd[1502]: time="2024-12-13T13:30:29.431319149Z" level=info msg="StartContainer for \"67df5b04daa98ea4d259c9345d369d8ae166b72399b6d213b7a1edffbbba9e69\" returns successfully" Dec 13 13:30:30.060388 kubelet[2325]: E1213 13:30:30.060344 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:30.064057 kubelet[2325]: E1213 13:30:30.063784 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:30.064057 kubelet[2325]: E1213 13:30:30.064007 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:30.142038 kubelet[2325]: I1213 13:30:30.141996 2325 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:30:30.302969 kubelet[2325]: E1213 13:30:30.302923 2325 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:30:30.395996 kubelet[2325]: I1213 13:30:30.395954 2325 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:30:31.020254 kubelet[2325]: I1213 13:30:31.020219 2325 apiserver.go:52] "Watching apiserver" Dec 13 13:30:31.029172 kubelet[2325]: I1213 13:30:31.029143 2325 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:30:31.069419 kubelet[2325]: E1213 13:30:31.069392 2325 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 13:30:31.069805 
kubelet[2325]: E1213 13:30:31.069788 2325 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:32.091313 systemd[1]: Reloading requested from client PID 2607 ('systemctl') (unit session-9.scope)... Dec 13 13:30:32.091328 systemd[1]: Reloading... Dec 13 13:30:32.162247 zram_generator::config[2649]: No configuration found. Dec 13 13:30:32.269690 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:30:32.358951 systemd[1]: Reloading finished in 267 ms. Dec 13 13:30:32.408959 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:30:32.409124 kubelet[2325]: E1213 13:30:32.408858 2325 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.1810bfb1d3f7036b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:30:27.023938411 +0000 UTC m=+0.359311729,LastTimestamp:2024-12-13 13:30:27.023938411 +0000 UTC m=+0.359311729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:30:32.431813 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:30:32.432085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:30:32.439905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:30:32.587283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:30:32.597961 (kubelet)[2691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:30:32.646302 kubelet[2691]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:30:32.646302 kubelet[2691]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:30:32.646302 kubelet[2691]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:30:32.646721 kubelet[2691]: I1213 13:30:32.646355 2691 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:30:32.651133 kubelet[2691]: I1213 13:30:32.651090 2691 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:30:32.651133 kubelet[2691]: I1213 13:30:32.651115 2691 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:30:32.651343 kubelet[2691]: I1213 13:30:32.651327 2691 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:30:32.652564 kubelet[2691]: I1213 13:30:32.652517 2691 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:30:32.653565 kubelet[2691]: I1213 13:30:32.653512 2691 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:30:32.660550 kubelet[2691]: I1213 13:30:32.660514 2691 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:30:32.660757 kubelet[2691]: I1213 13:30:32.660732 2691 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:30:32.661153 kubelet[2691]: I1213 13:30:32.660967 2691 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:30:32.661262 kubelet[2691]: I1213 13:30:32.661162 2691 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:30:32.661262 kubelet[2691]: I1213 13:30:32.661174 2691 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:30:32.661262 kubelet[2691]: I1213 13:30:32.661216 2691 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:30:32.661345 kubelet[2691]: I1213 13:30:32.661311 2691 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:30:32.661345 kubelet[2691]: I1213 13:30:32.661321 2691 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Dec 13 13:30:32.661345 kubelet[2691]: I1213 13:30:32.661342 2691 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:30:32.661426 kubelet[2691]: I1213 13:30:32.661359 2691 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:30:32.665552 kubelet[2691]: I1213 13:30:32.664342 2691 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:30:32.665552 kubelet[2691]: I1213 13:30:32.664547 2691 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:30:32.665552 kubelet[2691]: I1213 13:30:32.665214 2691 server.go:1264] "Started kubelet" Dec 13 13:30:32.665552 kubelet[2691]: I1213 13:30:32.665265 2691 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:30:32.666582 kubelet[2691]: I1213 13:30:32.665885 2691 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:30:32.670495 kubelet[2691]: I1213 13:30:32.667643 2691 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:30:32.670495 kubelet[2691]: I1213 13:30:32.667706 2691 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:30:32.670495 kubelet[2691]: I1213 13:30:32.667823 2691 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:30:32.670495 kubelet[2691]: I1213 13:30:32.667910 2691 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:30:32.670495 kubelet[2691]: I1213 13:30:32.668071 2691 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:30:32.670495 kubelet[2691]: I1213 13:30:32.668843 2691 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:30:32.677068 kubelet[2691]: I1213 13:30:32.677020 2691 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:30:32.677068 kubelet[2691]: E1213 13:30:32.677060 2691 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:30:32.677224 kubelet[2691]: I1213 13:30:32.677118 2691 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:30:32.678665 kubelet[2691]: I1213 13:30:32.678638 2691 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:30:32.682821 kubelet[2691]: I1213 13:30:32.682775 2691 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:30:32.684410 kubelet[2691]: I1213 13:30:32.684331 2691 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:30:32.684508 kubelet[2691]: I1213 13:30:32.684423 2691 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:30:32.684508 kubelet[2691]: I1213 13:30:32.684443 2691 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:30:32.685062 kubelet[2691]: E1213 13:30:32.685023 2691 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:30:32.718901 kubelet[2691]: I1213 13:30:32.718867 2691 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:30:32.718901 kubelet[2691]: I1213 13:30:32.718886 2691 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:30:32.718901 kubelet[2691]: I1213 13:30:32.718904 2691 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:30:32.719067 kubelet[2691]: I1213 13:30:32.719046 2691 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:30:32.719088 kubelet[2691]: I1213 13:30:32.719055 2691 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:30:32.719088 kubelet[2691]: I1213 13:30:32.719074 2691 policy_none.go:49] "None policy: Start" Dec 13 13:30:32.719700 kubelet[2691]: I1213 13:30:32.719672 2691 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:30:32.719700 kubelet[2691]: I1213 13:30:32.719692 2691 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:30:32.719843 kubelet[2691]: I1213 13:30:32.719832 2691 state_mem.go:75] "Updated machine memory state" Dec 13 13:30:32.724100 kubelet[2691]: I1213 13:30:32.724067 2691 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:30:32.724316 kubelet[2691]: I1213 13:30:32.724259 2691 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:30:32.724456 kubelet[2691]: I1213 13:30:32.724378 2691 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:30:32.772301 kubelet[2691]: I1213 13:30:32.772272 2691 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:30:32.778491 kubelet[2691]: I1213 13:30:32.778457 2691 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 13:30:32.778559 kubelet[2691]: I1213 13:30:32.778539 2691 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:30:32.785201 kubelet[2691]: I1213 13:30:32.785178 2691 topology_manager.go:215] "Topology Admit Handler" podUID="5400f2e7d5c1a34e6d7bab38788b85fd" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:30:32.785273 kubelet[2691]: I1213 13:30:32.785256 2691 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:30:32.785324 kubelet[2691]: I1213 13:30:32.785301 2691 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:30:32.869309 kubelet[2691]: I1213 13:30:32.869254 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 
13:30:32.869309 kubelet[2691]: I1213 13:30:32.869299 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:30:32.869431 kubelet[2691]: I1213 13:30:32.869351 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:30:32.869431 kubelet[2691]: I1213 13:30:32.869379 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:30:32.869431 kubelet[2691]: I1213 13:30:32.869395 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:30:32.869431 kubelet[2691]: I1213 13:30:32.869418 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:30:32.869576 kubelet[2691]: I1213 13:30:32.869435 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5400f2e7d5c1a34e6d7bab38788b85fd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5400f2e7d5c1a34e6d7bab38788b85fd\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:30:32.869576 kubelet[2691]: I1213 13:30:32.869453 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5400f2e7d5c1a34e6d7bab38788b85fd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5400f2e7d5c1a34e6d7bab38788b85fd\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:30:32.869576 kubelet[2691]: I1213 13:30:32.869497 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5400f2e7d5c1a34e6d7bab38788b85fd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5400f2e7d5c1a34e6d7bab38788b85fd\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:30:33.089672 sudo[2729]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 13:30:33.090077 sudo[2729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 13:30:33.091497 kubelet[2691]: E1213 13:30:33.090772 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:33.091497 kubelet[2691]: E1213 13:30:33.090889 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:33.091497 kubelet[2691]: E1213 13:30:33.091147 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:33.544563 sudo[2729]: pam_unix(sudo:session): session closed for user root Dec 13 13:30:33.662890 kubelet[2691]: I1213 13:30:33.662842 2691 apiserver.go:52] "Watching apiserver" Dec 13 13:30:33.669034 kubelet[2691]: I1213 13:30:33.669006 2691 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:30:33.698299 kubelet[2691]: E1213 13:30:33.697888 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:33.698299 kubelet[2691]: E1213 13:30:33.697913 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:33.698417 kubelet[2691]: E1213 13:30:33.698333 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:33.713723 kubelet[2691]: I1213 13:30:33.713681 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.71366635 podStartE2EDuration="1.71366635s" podCreationTimestamp="2024-12-13 13:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:30:33.712790439 +0000 UTC m=+1.109764738" watchObservedRunningTime="2024-12-13 13:30:33.71366635 +0000 UTC m=+1.110640649" Dec 13 13:30:33.720098 kubelet[2691]: I1213 13:30:33.720053 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.720042903 podStartE2EDuration="1.720042903s" podCreationTimestamp="2024-12-13 13:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:30:33.719928433 +0000 UTC m=+1.116902732" watchObservedRunningTime="2024-12-13 13:30:33.720042903 +0000 UTC m=+1.117017202" Dec 13 13:30:33.733517 kubelet[2691]: I1213 13:30:33.733246 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.73322468 podStartE2EDuration="1.73322468s" podCreationTimestamp="2024-12-13 13:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:30:33.726141853 +0000 UTC m=+1.123116152" watchObservedRunningTime="2024-12-13 13:30:33.73322468 +0000 UTC m=+1.130198979" Dec 13 13:30:34.699725 kubelet[2691]: E1213 13:30:34.699544 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Dec 13 13:30:34.699725 kubelet[2691]: E1213 13:30:34.699680 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:34.954260 sudo[1694]: pam_unix(sudo:session): session closed for user root Dec 13 13:30:34.955906 sshd[1693]: Connection closed by 10.0.0.1 port 39242 Dec 13 13:30:34.956321 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:34.960849 systemd[1]: sshd@8-10.0.0.121:22-10.0.0.1:39242.service: Deactivated successfully. Dec 13 13:30:34.962917 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:30:34.963101 systemd[1]: session-9.scope: Consumed 4.970s CPU time, 189.3M memory peak, 0B memory swap peak. Dec 13 13:30:34.963644 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:30:34.964479 systemd-logind[1477]: Removed session 9. Dec 13 13:30:35.867308 kubelet[2691]: E1213 13:30:35.867261 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:39.193440 kubelet[2691]: E1213 13:30:39.193397 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:39.704856 kubelet[2691]: E1213 13:30:39.704832 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:42.957497 update_engine[1480]: I20241213 13:30:42.957418 1480 update_attempter.cc:509] Updating boot flags... Dec 13 13:30:43.016503 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2778) Dec 13 13:30:43.058352 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2779) Dec 13 13:30:43.084503 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2779) Dec 13 13:30:44.338781 kubelet[2691]: E1213 13:30:44.338736 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:45.870793 kubelet[2691]: E1213 13:30:45.870757 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:46.713525 kubelet[2691]: E1213 13:30:46.713492 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:47.570901 kubelet[2691]: I1213 13:30:47.570856 2691 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:30:47.571321 containerd[1502]: time="2024-12-13T13:30:47.571144536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
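The dns.go:153 errors that repeat throughout this log are kubelet's resolv.conf guardrail: the classic glibc resolver only consults the first three nameserver entries, so when the host's /etc/resolv.conf lists more, kubelet truncates the list and logs a warning; here it keeps 1.1.1.1, 1.0.0.1, and 8.8.8.8. A minimal sketch of that check (the constant and the parsing are simplified here; kubelet's real logic lives in its dns package):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic resolver limit kubelet enforces;
// entries past the third are dropped with a warning, which is what
// produces the repeated "Nameserver limits exceeded" lines in this log.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```

The warning is cosmetic as long as the first three servers are the ones pods should actually use.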
Dec 13 13:30:47.571572 kubelet[2691]: I1213 13:30:47.571416 2691 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:30:48.068756 kubelet[2691]: I1213 13:30:48.068717 2691 topology_manager.go:215] "Topology Admit Handler" podUID="b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad" podNamespace="kube-system" podName="kube-proxy-vxcct" Dec 13 13:30:48.071017 kubelet[2691]: I1213 13:30:48.070667 2691 topology_manager.go:215] "Topology Admit Handler" podUID="78f475bc-85ff-47a3-8f1f-5d9cd7115cea" podNamespace="kube-system" podName="cilium-9b9x6" Dec 13 13:30:48.080360 systemd[1]: Created slice kubepods-besteffort-podb8fee92f_a5a3_48b4_bfb4_219b02e1b8ad.slice - libcontainer container kubepods-besteffort-podb8fee92f_a5a3_48b4_bfb4_219b02e1b8ad.slice. Dec 13 13:30:48.096922 systemd[1]: Created slice kubepods-burstable-pod78f475bc_85ff_47a3_8f1f_5d9cd7115cea.slice - libcontainer container kubepods-burstable-pod78f475bc_85ff_47a3_8f1f_5d9cd7115cea.slice. Dec 13 13:30:48.168198 kubelet[2691]: I1213 13:30:48.168160 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-bpf-maps\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.168198 kubelet[2691]: I1213 13:30:48.168200 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-hubble-tls\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.173870 kubelet[2691]: I1213 13:30:48.168218 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwsnc\" (UniqueName: \"kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-kube-api-access-xwsnc\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.173870 kubelet[2691]: I1213 13:30:48.168235 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpgzw\" (UniqueName: \"kubernetes.io/projected/b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad-kube-api-access-fpgzw\") pod \"kube-proxy-vxcct\" (UID: \"b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad\") " pod="kube-system/kube-proxy-vxcct" Dec 13 13:30:48.173870 kubelet[2691]: I1213 13:30:48.168249 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-hostproc\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.173870 kubelet[2691]: I1213 13:30:48.168264 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-config-path\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.173870 kubelet[2691]: I1213 13:30:48.168318 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-xtables-lock\") pod \"cilium-9b9x6\" (UID: 
\"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.174001 kubelet[2691]: I1213 13:30:48.168359 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-host-proc-sys-net\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.174001 kubelet[2691]: I1213 13:30:48.168412 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-clustermesh-secrets\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.174001 kubelet[2691]: I1213 13:30:48.168436 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-host-proc-sys-kernel\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.174001 kubelet[2691]: I1213 13:30:48.168459 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad-kube-proxy\") pod \"kube-proxy-vxcct\" (UID: \"b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad\") " pod="kube-system/kube-proxy-vxcct" Dec 13 13:30:48.174001 kubelet[2691]: I1213 13:30:48.168484 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-run\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.174001 kubelet[2691]: I1213 13:30:48.168499 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cni-path\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.174134 kubelet[2691]: I1213 13:30:48.168512 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-etc-cni-netd\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.174134 kubelet[2691]: I1213 13:30:48.168526 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad-xtables-lock\") pod \"kube-proxy-vxcct\" (UID: \"b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad\") " pod="kube-system/kube-proxy-vxcct" Dec 13 13:30:48.174134 kubelet[2691]: I1213 13:30:48.168541 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad-lib-modules\") pod \"kube-proxy-vxcct\" (UID: \"b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad\") " pod="kube-system/kube-proxy-vxcct" Dec 13 13:30:48.174134 kubelet[2691]: I1213 13:30:48.168552 2691 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-cgroup\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.174134 kubelet[2691]: I1213 13:30:48.168578 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-lib-modules\") pod \"cilium-9b9x6\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " pod="kube-system/cilium-9b9x6" Dec 13 13:30:48.274676 kubelet[2691]: E1213 13:30:48.274611 2691 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 13:30:48.274676 kubelet[2691]: E1213 13:30:48.274647 2691 projected.go:200] Error preparing data for projected volume kube-api-access-xwsnc for pod kube-system/cilium-9b9x6: configmap "kube-root-ca.crt" not found Dec 13 13:30:48.274816 kubelet[2691]: E1213 13:30:48.274697 2691 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-kube-api-access-xwsnc podName:78f475bc-85ff-47a3-8f1f-5d9cd7115cea nodeName:}" failed. No retries permitted until 2024-12-13 13:30:48.774680566 +0000 UTC m=+16.171654865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xwsnc" (UniqueName: "kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-kube-api-access-xwsnc") pod "cilium-9b9x6" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea") : configmap "kube-root-ca.crt" not found Dec 13 13:30:48.275167 kubelet[2691]: E1213 13:30:48.275012 2691 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 13:30:48.275167 kubelet[2691]: E1213 13:30:48.275027 2691 projected.go:200] Error preparing data for projected volume kube-api-access-fpgzw for pod kube-system/kube-proxy-vxcct: configmap "kube-root-ca.crt" not found Dec 13 13:30:48.275167 kubelet[2691]: E1213 13:30:48.275053 2691 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad-kube-api-access-fpgzw podName:b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad nodeName:}" failed. No retries permitted until 2024-12-13 13:30:48.775044915 +0000 UTC m=+16.172019214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fpgzw" (UniqueName: "kubernetes.io/projected/b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad-kube-api-access-fpgzw") pod "kube-proxy-vxcct" (UID: "b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad") : configmap "kube-root-ca.crt" not found Dec 13 13:30:48.463989 kubelet[2691]: I1213 13:30:48.463593 2691 topology_manager.go:215] "Topology Admit Handler" podUID="2a442a27-a2da-493d-a9c5-a4882c486d72" podNamespace="kube-system" podName="cilium-operator-599987898-tvmcl" Dec 13 13:30:48.478352 systemd[1]: Created slice kubepods-besteffort-pod2a442a27_a2da_493d_a9c5_a4882c486d72.slice - libcontainer container kubepods-besteffort-pod2a442a27_a2da_493d_a9c5_a4882c486d72.slice. 
Dec 13 13:30:48.571597 kubelet[2691]: I1213 13:30:48.571539 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a442a27-a2da-493d-a9c5-a4882c486d72-cilium-config-path\") pod \"cilium-operator-599987898-tvmcl\" (UID: \"2a442a27-a2da-493d-a9c5-a4882c486d72\") " pod="kube-system/cilium-operator-599987898-tvmcl" Dec 13 13:30:48.571597 kubelet[2691]: I1213 13:30:48.571584 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59bt\" (UniqueName: \"kubernetes.io/projected/2a442a27-a2da-493d-a9c5-a4882c486d72-kube-api-access-p59bt\") pod \"cilium-operator-599987898-tvmcl\" (UID: \"2a442a27-a2da-493d-a9c5-a4882c486d72\") " pod="kube-system/cilium-operator-599987898-tvmcl" Dec 13 13:30:48.781522 kubelet[2691]: E1213 13:30:48.781333 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:48.781906 containerd[1502]: time="2024-12-13T13:30:48.781867342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tvmcl,Uid:2a442a27-a2da-493d-a9c5-a4882c486d72,Namespace:kube-system,Attempt:0,}" Dec 13 13:30:48.806133 containerd[1502]: time="2024-12-13T13:30:48.806036113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:30:48.806392 containerd[1502]: time="2024-12-13T13:30:48.806111727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:30:48.806392 containerd[1502]: time="2024-12-13T13:30:48.806264215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:48.806830 containerd[1502]: time="2024-12-13T13:30:48.806352651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:48.832619 systemd[1]: Started cri-containerd-1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5.scope - libcontainer container 1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5. 
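The sandbox lines above are the standard CRI handshake: kubelet asks containerd to RunPodSandbox, systemd tracks the resulting shim as a cri-containerd-&lt;id&gt;.scope, and the returned sandbox id keys every later container call for that pod. A minimal sketch of issuing the same request with the CRI runtime client (metadata copied from the log entry above; assumes the k8s.io/cri-api v1 API and containerd's default socket path, with error handling trimmed):

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)

	// Mirrors the PodSandboxMetadata printed in the log above.
	cfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name:      "cilium-operator-599987898-tvmcl",
			Uid:       "2a442a27-a2da-493d-a9c5-a4882c486d72",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	resp, err := rt.RunPodSandbox(context.Background(),
		&runtime.RunPodSandboxRequest{Config: cfg})
	if err != nil {
		panic(err)
	}
	// This id is what systemd tracks as cri-containerd-<id>.scope and
	// what the later CreateContainer calls are anchored to.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```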
Dec 13 13:30:48.866892 containerd[1502]: time="2024-12-13T13:30:48.866857779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tvmcl,Uid:2a442a27-a2da-493d-a9c5-a4882c486d72,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5\"" Dec 13 13:30:48.867776 kubelet[2691]: E1213 13:30:48.867756 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:48.868797 containerd[1502]: time="2024-12-13T13:30:48.868748917Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 13:30:48.994193 kubelet[2691]: E1213 13:30:48.994140 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:48.994622 containerd[1502]: time="2024-12-13T13:30:48.994589610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vxcct,Uid:b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad,Namespace:kube-system,Attempt:0,}" Dec 13 13:30:48.999357 kubelet[2691]: E1213 13:30:48.999330 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:49.000147 containerd[1502]: time="2024-12-13T13:30:48.999672285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9b9x6,Uid:78f475bc-85ff-47a3-8f1f-5d9cd7115cea,Namespace:kube-system,Attempt:0,}" Dec 13 13:30:49.019776 containerd[1502]: time="2024-12-13T13:30:49.019687901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:30:49.019776 containerd[1502]: time="2024-12-13T13:30:49.019735360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:30:49.020118 containerd[1502]: time="2024-12-13T13:30:49.019748796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:49.020118 containerd[1502]: time="2024-12-13T13:30:49.019815973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:49.024383 containerd[1502]: time="2024-12-13T13:30:49.024300330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:30:49.024513 containerd[1502]: time="2024-12-13T13:30:49.024373159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:30:49.024513 containerd[1502]: time="2024-12-13T13:30:49.024400039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:49.024568 containerd[1502]: time="2024-12-13T13:30:49.024511711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:49.038618 systemd[1]: Started cri-containerd-daf294f3e8b74df1b9907b15eb18fad6cf46e897812b5d899f8cd62759279e6a.scope - libcontainer container daf294f3e8b74df1b9907b15eb18fad6cf46e897812b5d899f8cd62759279e6a. Dec 13 13:30:49.041669 systemd[1]: Started cri-containerd-de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b.scope - libcontainer container de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b. Dec 13 13:30:49.068311 containerd[1502]: time="2024-12-13T13:30:49.067695247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9b9x6,Uid:78f475bc-85ff-47a3-8f1f-5d9cd7115cea,Namespace:kube-system,Attempt:0,} returns sandbox id \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\"" Dec 13 13:30:49.068429 kubelet[2691]: E1213 13:30:49.068281 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:49.069688 containerd[1502]: time="2024-12-13T13:30:49.069622550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vxcct,Uid:b8fee92f-a5a3-48b4-bfb4-219b02e1b8ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"daf294f3e8b74df1b9907b15eb18fad6cf46e897812b5d899f8cd62759279e6a\"" Dec 13 13:30:49.070495 kubelet[2691]: E1213 13:30:49.070453 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:49.072249 containerd[1502]: time="2024-12-13T13:30:49.072219671Z" level=info msg="CreateContainer within sandbox \"daf294f3e8b74df1b9907b15eb18fad6cf46e897812b5d899f8cd62759279e6a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:30:49.090781 containerd[1502]: time="2024-12-13T13:30:49.090691566Z" level=info msg="CreateContainer within sandbox \"daf294f3e8b74df1b9907b15eb18fad6cf46e897812b5d899f8cd62759279e6a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"202141f54257998135a68e0dc0d970173821a37a0e8e191fa6d5babc9d3f7236\"" Dec 13 13:30:49.091330 containerd[1502]: time="2024-12-13T13:30:49.091191300Z" level=info msg="StartContainer for \"202141f54257998135a68e0dc0d970173821a37a0e8e191fa6d5babc9d3f7236\"" Dec 13 13:30:49.121606 systemd[1]: Started cri-containerd-202141f54257998135a68e0dc0d970173821a37a0e8e191fa6d5babc9d3f7236.scope - libcontainer container 202141f54257998135a68e0dc0d970173821a37a0e8e191fa6d5babc9d3f7236. 
Dec 13 13:30:49.153973 containerd[1502]: time="2024-12-13T13:30:49.153887489Z" level=info msg="StartContainer for \"202141f54257998135a68e0dc0d970173821a37a0e8e191fa6d5babc9d3f7236\" returns successfully" Dec 13 13:30:49.718466 kubelet[2691]: E1213 13:30:49.718422 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:49.730494 kubelet[2691]: I1213 13:30:49.728697 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vxcct" podStartSLOduration=1.728679578 podStartE2EDuration="1.728679578s" podCreationTimestamp="2024-12-13 13:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:30:49.72861198 +0000 UTC m=+17.125586279" watchObservedRunningTime="2024-12-13 13:30:49.728679578 +0000 UTC m=+17.125653867" Dec 13 13:30:50.663006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4218595633.mount: Deactivated successfully. Dec 13 13:30:51.967808 containerd[1502]: time="2024-12-13T13:30:51.967758442Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:51.968604 containerd[1502]: time="2024-12-13T13:30:51.968554887Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907193" Dec 13 13:30:51.969610 containerd[1502]: time="2024-12-13T13:30:51.969579222Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:30:51.970894 containerd[1502]: time="2024-12-13T13:30:51.970864910Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.102081026s" Dec 13 13:30:51.970894 containerd[1502]: time="2024-12-13T13:30:51.970891631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 13:30:51.972072 containerd[1502]: time="2024-12-13T13:30:51.972044318Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 13:30:51.973527 containerd[1502]: time="2024-12-13T13:30:51.973451386Z" level=info msg="CreateContainer within sandbox \"1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 13:30:51.987831 containerd[1502]: time="2024-12-13T13:30:51.987789088Z" level=info msg="CreateContainer within sandbox \"1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\"" Dec 13 
13:30:51.988283 containerd[1502]: time="2024-12-13T13:30:51.988248415Z" level=info msg="StartContainer for \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\"" Dec 13 13:30:52.016622 systemd[1]: Started cri-containerd-97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255.scope - libcontainer container 97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255. Dec 13 13:30:52.040811 containerd[1502]: time="2024-12-13T13:30:52.040759135Z" level=info msg="StartContainer for \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\" returns successfully" Dec 13 13:30:52.737328 kubelet[2691]: E1213 13:30:52.737291 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:52.746272 kubelet[2691]: I1213 13:30:52.746200 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-tvmcl" podStartSLOduration=1.64261072 podStartE2EDuration="4.746185129s" podCreationTimestamp="2024-12-13 13:30:48 +0000 UTC" firstStartedPulling="2024-12-13 13:30:48.868353169 +0000 UTC m=+16.265327468" lastFinishedPulling="2024-12-13 13:30:51.971927588 +0000 UTC m=+19.368901877" observedRunningTime="2024-12-13 13:30:52.745510966 +0000 UTC m=+20.142485275" watchObservedRunningTime="2024-12-13 13:30:52.746185129 +0000 UTC m=+20.143159428" Dec 13 13:30:53.729225 kubelet[2691]: E1213 13:30:53.729195 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:30:59.290456 systemd[1]: Started sshd@9-10.0.0.121:22-10.0.0.1:49820.service - OpenSSH per-connection server daemon (10.0.0.1:49820). Dec 13 13:30:59.332585 sshd[3128]: Accepted publickey for core from 10.0.0.1 port 49820 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:30:59.333854 sshd-session[3128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:59.340782 systemd-logind[1477]: New session 10 of user core. Dec 13 13:30:59.345655 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:30:59.477337 sshd[3130]: Connection closed by 10.0.0.1 port 49820 Dec 13 13:30:59.477705 sshd-session[3128]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:59.481788 systemd[1]: sshd@9-10.0.0.121:22-10.0.0.1:49820.service: Deactivated successfully. Dec 13 13:30:59.484000 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:30:59.484635 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:30:59.485527 systemd-logind[1477]: Removed session 10. Dec 13 13:31:03.716205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524042106.mount: Deactivated successfully. Dec 13 13:31:04.488575 systemd[1]: Started sshd@10-10.0.0.121:22-10.0.0.1:49830.service - OpenSSH per-connection server daemon (10.0.0.1:49830). Dec 13 13:31:04.531259 sshd[3165]: Accepted publickey for core from 10.0.0.1 port 49830 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:04.532742 sshd-session[3165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:04.537039 systemd-logind[1477]: New session 11 of user core. Dec 13 13:31:04.547587 systemd[1]: Started session-11.scope - Session 11 of User core. 
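The pod_startup_latency_tracker entries encode one simple relation: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). For cilium-operator above, 4.746185129s minus the 3.103574419s pull window gives exactly the logged 1.64261072s. A sketch of that bookkeeping (note the log prints the creation timestamp truncated to whole seconds, so recomputing from it lands about 0.7ms off the logged figures):

```go
package main

import (
	"fmt"
	"time"
)

// startupDurations reproduces the relation visible in the
// pod_startup_latency_tracker lines: E2E is wall time from pod creation
// to first observed running, and the SLO figure excludes pull time.
func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e - lastPull.Sub(firstPull)
	return slo, e2e
}

func main() {
	parse := func(s string) time.Time {
		t, _ := time.Parse(time.RFC3339Nano, s)
		return t
	}
	// Timestamps taken from the cilium-operator entry above; the
	// creation time is truncated to the second in the log.
	created := parse("2024-12-13T13:30:48Z")
	firstPull := parse("2024-12-13T13:30:48.868353169Z")
	lastPull := parse("2024-12-13T13:30:51.971927588Z")
	running := parse("2024-12-13T13:30:52.745510966Z")

	slo, e2e := startupDurations(created, firstPull, lastPull, running)
	fmt.Printf("SLO=%v E2E=%v\n", slo, e2e) // ≈1.6419s and ≈4.7455s
}
```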
Dec 13 13:31:04.700773 sshd[3167]: Connection closed by 10.0.0.1 port 49830 Dec 13 13:31:04.701146 sshd-session[3165]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:04.705340 systemd[1]: sshd@10-10.0.0.121:22-10.0.0.1:49830.service: Deactivated successfully. Dec 13 13:31:04.707528 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:31:04.708136 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:31:04.709031 systemd-logind[1477]: Removed session 11. Dec 13 13:31:06.031436 containerd[1502]: time="2024-12-13T13:31:06.031377284Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:06.032004 containerd[1502]: time="2024-12-13T13:31:06.031976110Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734723" Dec 13 13:31:06.033063 containerd[1502]: time="2024-12-13T13:31:06.033035682Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:31:06.034536 containerd[1502]: time="2024-12-13T13:31:06.034514503Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.062444958s" Dec 13 13:31:06.034587 containerd[1502]: time="2024-12-13T13:31:06.034539781Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 13:31:06.042935 containerd[1502]: time="2024-12-13T13:31:06.042901334Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:31:06.056026 containerd[1502]: time="2024-12-13T13:31:06.055991159Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\"" Dec 13 13:31:06.056405 containerd[1502]: time="2024-12-13T13:31:06.056378698Z" level=info msg="StartContainer for \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\"" Dec 13 13:31:06.087660 systemd[1]: Started cri-containerd-e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115.scope - libcontainer container e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115. Dec 13 13:31:06.170049 systemd[1]: cri-containerd-e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115.scope: Deactivated successfully. 
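As a back-of-envelope check on the pull above: 166,734,723 bytes in 14.062444958s works out to roughly 11.9 MB/s, versus about 6.1 MB/s for the 18,907,193-byte operator image earlier (18,907,193 / 3.102081026s). The repo tag is empty in both "Pulled image" entries because each image was pulled by digest rather than by tag.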
Dec 13 13:31:06.190343 containerd[1502]: time="2024-12-13T13:31:06.190288543Z" level=info msg="StartContainer for \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\" returns successfully" Dec 13 13:31:06.479907 containerd[1502]: time="2024-12-13T13:31:06.479840637Z" level=info msg="shim disconnected" id=e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115 namespace=k8s.io Dec 13 13:31:06.479907 containerd[1502]: time="2024-12-13T13:31:06.479899057Z" level=warning msg="cleaning up after shim disconnected" id=e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115 namespace=k8s.io Dec 13 13:31:06.479907 containerd[1502]: time="2024-12-13T13:31:06.479907122Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:31:06.792492 kubelet[2691]: E1213 13:31:06.792199 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:06.794094 containerd[1502]: time="2024-12-13T13:31:06.794050891Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:31:06.915126 containerd[1502]: time="2024-12-13T13:31:06.915073370Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\"" Dec 13 13:31:06.915730 containerd[1502]: time="2024-12-13T13:31:06.915678808Z" level=info msg="StartContainer for \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\"" Dec 13 13:31:06.944651 systemd[1]: Started cri-containerd-333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1.scope - libcontainer container 333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1. Dec 13 13:31:06.969171 containerd[1502]: time="2024-12-13T13:31:06.969121195Z" level=info msg="StartContainer for \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\" returns successfully" Dec 13 13:31:06.980934 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:31:06.981181 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:31:06.981253 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:31:06.987871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:31:06.988104 systemd[1]: cri-containerd-333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1.scope: Deactivated successfully. Dec 13 13:31:07.005267 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 13:31:07.016118 containerd[1502]: time="2024-12-13T13:31:07.016055780Z" level=info msg="shim disconnected" id=333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1 namespace=k8s.io Dec 13 13:31:07.016118 containerd[1502]: time="2024-12-13T13:31:07.016111225Z" level=warning msg="cleaning up after shim disconnected" id=333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1 namespace=k8s.io Dec 13 13:31:07.016118 containerd[1502]: time="2024-12-13T13:31:07.016121384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:31:07.052890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115-rootfs.mount: Deactivated successfully. Dec 13 13:31:07.796121 kubelet[2691]: E1213 13:31:07.796087 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:07.797648 containerd[1502]: time="2024-12-13T13:31:07.797603441Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:31:07.816346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436902052.mount: Deactivated successfully. Dec 13 13:31:07.819291 containerd[1502]: time="2024-12-13T13:31:07.819237677Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\"" Dec 13 13:31:07.819926 containerd[1502]: time="2024-12-13T13:31:07.819875886Z" level=info msg="StartContainer for \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\"" Dec 13 13:31:07.852615 systemd[1]: Started cri-containerd-04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c.scope - libcontainer container 04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c. Dec 13 13:31:07.884266 containerd[1502]: time="2024-12-13T13:31:07.884220384Z" level=info msg="StartContainer for \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\" returns successfully" Dec 13 13:31:07.885591 systemd[1]: cri-containerd-04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c.scope: Deactivated successfully. Dec 13 13:31:07.916988 containerd[1502]: time="2024-12-13T13:31:07.916922268Z" level=info msg="shim disconnected" id=04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c namespace=k8s.io Dec 13 13:31:07.916988 containerd[1502]: time="2024-12-13T13:31:07.916985336Z" level=warning msg="cleaning up after shim disconnected" id=04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c namespace=k8s.io Dec 13 13:31:07.917177 containerd[1502]: time="2024-12-13T13:31:07.916997960Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:31:08.052826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c-rootfs.mount: Deactivated successfully. 
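mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs above are cilium's init containers, and the repeating scope-started / scope-deactivated / "shim disconnected ... cleaning up dead shim" pattern is each one running to completion before kubelet launches the next; the main agent container, started further down, only runs once the whole chain has succeeded. A toy sketch of that sequencing contract (names taken from the log; the runner is illustrative, not kubelet code):

```go
package main

import "fmt"

// runInit models kubelet's init-container rule: containers run one at a
// time, in order, and a non-zero exit blocks everything after it.
func runInit(names []string, run func(string) error) error {
	for _, name := range names {
		if err := run(name); err != nil {
			return fmt.Errorf("init container %s failed: %w", name, err)
		}
		// On success the shim exits and containerd logs the
		// "shim disconnected ... cleaning up dead shim" triple seen above.
	}
	return nil
}

func main() {
	// Order observed in this log for the cilium-9b9x6 pod.
	inits := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs"}
	_ = runInit(inits, func(name string) error {
		fmt.Println("running", name)
		return nil
	})
}
```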
Dec 13 13:31:08.799645 kubelet[2691]: E1213 13:31:08.799614 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:08.801721 containerd[1502]: time="2024-12-13T13:31:08.801671031Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:31:09.373681 containerd[1502]: time="2024-12-13T13:31:09.373628543Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\"" Dec 13 13:31:09.374321 containerd[1502]: time="2024-12-13T13:31:09.374265060Z" level=info msg="StartContainer for \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\"" Dec 13 13:31:09.397712 systemd[1]: run-containerd-runc-k8s.io-16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3-runc.6G77K6.mount: Deactivated successfully. Dec 13 13:31:09.413614 systemd[1]: Started cri-containerd-16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3.scope - libcontainer container 16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3. Dec 13 13:31:09.436854 systemd[1]: cri-containerd-16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3.scope: Deactivated successfully. Dec 13 13:31:09.549906 containerd[1502]: time="2024-12-13T13:31:09.549843963Z" level=info msg="StartContainer for \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\" returns successfully" Dec 13 13:31:09.712559 systemd[1]: Started sshd@11-10.0.0.121:22-10.0.0.1:46680.service - OpenSSH per-connection server daemon (10.0.0.1:46680). Dec 13 13:31:09.803084 kubelet[2691]: E1213 13:31:09.803029 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:09.813288 sshd[3429]: Accepted publickey for core from 10.0.0.1 port 46680 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:09.815069 sshd-session[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:09.819390 systemd-logind[1477]: New session 12 of user core. Dec 13 13:31:09.829711 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:31:09.960432 containerd[1502]: time="2024-12-13T13:31:09.960354157Z" level=info msg="shim disconnected" id=16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3 namespace=k8s.io Dec 13 13:31:09.960432 containerd[1502]: time="2024-12-13T13:31:09.960408268Z" level=warning msg="cleaning up after shim disconnected" id=16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3 namespace=k8s.io Dec 13 13:31:09.960432 containerd[1502]: time="2024-12-13T13:31:09.960417586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:31:09.970602 sshd[3431]: Connection closed by 10.0.0.1 port 46680 Dec 13 13:31:09.972414 sshd-session[3429]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:09.977695 systemd[1]: sshd@11-10.0.0.121:22-10.0.0.1:46680.service: Deactivated successfully. Dec 13 13:31:09.979904 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 13 13:31:09.980584 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:31:09.981614 systemd-logind[1477]: Removed session 12. Dec 13 13:31:10.157220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3-rootfs.mount: Deactivated successfully. Dec 13 13:31:10.806243 kubelet[2691]: E1213 13:31:10.806211 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:10.808820 containerd[1502]: time="2024-12-13T13:31:10.808769782Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:31:10.832009 containerd[1502]: time="2024-12-13T13:31:10.831948542Z" level=info msg="CreateContainer within sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\"" Dec 13 13:31:10.832708 containerd[1502]: time="2024-12-13T13:31:10.832620735Z" level=info msg="StartContainer for \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\"" Dec 13 13:31:10.861605 systemd[1]: Started cri-containerd-d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52.scope - libcontainer container d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52. Dec 13 13:31:10.892893 containerd[1502]: time="2024-12-13T13:31:10.892830539Z" level=info msg="StartContainer for \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\" returns successfully" Dec 13 13:31:10.987283 kubelet[2691]: I1213 13:31:10.987242 2691 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:31:11.008230 kubelet[2691]: I1213 13:31:11.008181 2691 topology_manager.go:215] "Topology Admit Handler" podUID="e70b380c-ef77-4381-9584-f84e7fb6946f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-f9k69" Dec 13 13:31:11.008383 kubelet[2691]: I1213 13:31:11.008333 2691 topology_manager.go:215] "Topology Admit Handler" podUID="462d47fa-b1ba-4942-91f7-f1156a0d9b34" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zw6x4" Dec 13 13:31:11.015277 systemd[1]: Created slice kubepods-burstable-pod462d47fa_b1ba_4942_91f7_f1156a0d9b34.slice - libcontainer container kubepods-burstable-pod462d47fa_b1ba_4942_91f7_f1156a0d9b34.slice. Dec 13 13:31:11.021620 systemd[1]: Created slice kubepods-burstable-pode70b380c_ef77_4381_9584_f84e7fb6946f.slice - libcontainer container kubepods-burstable-pode70b380c_ef77_4381_9584_f84e7fb6946f.slice. 
Dec 13 13:31:11.026886 kubelet[2691]: I1213 13:31:11.026841 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e70b380c-ef77-4381-9584-f84e7fb6946f-config-volume\") pod \"coredns-7db6d8ff4d-f9k69\" (UID: \"e70b380c-ef77-4381-9584-f84e7fb6946f\") " pod="kube-system/coredns-7db6d8ff4d-f9k69" Dec 13 13:31:11.026886 kubelet[2691]: I1213 13:31:11.026878 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/462d47fa-b1ba-4942-91f7-f1156a0d9b34-config-volume\") pod \"coredns-7db6d8ff4d-zw6x4\" (UID: \"462d47fa-b1ba-4942-91f7-f1156a0d9b34\") " pod="kube-system/coredns-7db6d8ff4d-zw6x4" Dec 13 13:31:11.026990 kubelet[2691]: I1213 13:31:11.026900 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsjvr\" (UniqueName: \"kubernetes.io/projected/462d47fa-b1ba-4942-91f7-f1156a0d9b34-kube-api-access-dsjvr\") pod \"coredns-7db6d8ff4d-zw6x4\" (UID: \"462d47fa-b1ba-4942-91f7-f1156a0d9b34\") " pod="kube-system/coredns-7db6d8ff4d-zw6x4" Dec 13 13:31:11.027464 kubelet[2691]: I1213 13:31:11.027403 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5992\" (UniqueName: \"kubernetes.io/projected/e70b380c-ef77-4381-9584-f84e7fb6946f-kube-api-access-g5992\") pod \"coredns-7db6d8ff4d-f9k69\" (UID: \"e70b380c-ef77-4381-9584-f84e7fb6946f\") " pod="kube-system/coredns-7db6d8ff4d-f9k69" Dec 13 13:31:11.319946 kubelet[2691]: E1213 13:31:11.319898 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:11.320595 containerd[1502]: time="2024-12-13T13:31:11.320519730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zw6x4,Uid:462d47fa-b1ba-4942-91f7-f1156a0d9b34,Namespace:kube-system,Attempt:0,}" Dec 13 13:31:11.324465 kubelet[2691]: E1213 13:31:11.324442 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:11.324907 containerd[1502]: time="2024-12-13T13:31:11.324879191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f9k69,Uid:e70b380c-ef77-4381-9584-f84e7fb6946f,Namespace:kube-system,Attempt:0,}" Dec 13 13:31:11.816340 kubelet[2691]: E1213 13:31:11.816290 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:11.827465 kubelet[2691]: I1213 13:31:11.827411 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9b9x6" podStartSLOduration=6.860505516 podStartE2EDuration="23.827393527s" podCreationTimestamp="2024-12-13 13:30:48 +0000 UTC" firstStartedPulling="2024-12-13 13:30:49.069082099 +0000 UTC m=+16.466056398" lastFinishedPulling="2024-12-13 13:31:06.03597011 +0000 UTC m=+33.432944409" observedRunningTime="2024-12-13 13:31:11.827179555 +0000 UTC m=+39.224153854" watchObservedRunningTime="2024-12-13 13:31:11.827393527 +0000 UTC m=+39.224367836" Dec 13 13:31:12.817308 kubelet[2691]: E1213 13:31:12.817265 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:13.120963 systemd-networkd[1417]: cilium_host: Link UP Dec 13 13:31:13.121654 systemd-networkd[1417]: cilium_net: Link UP Dec 13 13:31:13.121659 systemd-networkd[1417]: cilium_net: Gained carrier Dec 13 13:31:13.121886 systemd-networkd[1417]: cilium_host: Gained carrier Dec 13 13:31:13.215799 systemd-networkd[1417]: cilium_vxlan: Link UP Dec 13 13:31:13.215809 systemd-networkd[1417]: cilium_vxlan: Gained carrier Dec 13 13:31:13.446503 kernel: NET: Registered PF_ALG protocol family Dec 13 13:31:13.818986 kubelet[2691]: E1213 13:31:13.818940 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:13.998607 systemd-networkd[1417]: cilium_host: Gained IPv6LL Dec 13 13:31:14.063567 systemd-networkd[1417]: cilium_net: Gained IPv6LL Dec 13 13:31:14.101129 systemd-networkd[1417]: lxc_health: Link UP Dec 13 13:31:14.108987 systemd-networkd[1417]: lxc_health: Gained carrier Dec 13 13:31:14.319606 systemd-networkd[1417]: cilium_vxlan: Gained IPv6LL Dec 13 13:31:14.406170 systemd-networkd[1417]: lxc814e7b75107f: Link UP Dec 13 13:31:14.412569 kernel: eth0: renamed from tmpf5d79 Dec 13 13:31:14.420202 systemd-networkd[1417]: lxc814e7b75107f: Gained carrier Dec 13 13:31:14.420644 systemd-networkd[1417]: lxc371cd27476b8: Link UP Dec 13 13:31:14.435679 kernel: eth0: renamed from tmpb477f Dec 13 13:31:14.444527 systemd-networkd[1417]: lxc371cd27476b8: Gained carrier Dec 13 13:31:14.988618 systemd[1]: Started sshd@12-10.0.0.121:22-10.0.0.1:46690.service - OpenSSH per-connection server daemon (10.0.0.1:46690). Dec 13 13:31:15.003711 kubelet[2691]: E1213 13:31:15.003670 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:15.034499 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 46690 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:15.036598 sshd-session[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:15.043363 systemd-logind[1477]: New session 13 of user core. Dec 13 13:31:15.047764 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:31:15.192621 sshd[3978]: Connection closed by 10.0.0.1 port 46690 Dec 13 13:31:15.193024 sshd-session[3976]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:15.197231 systemd[1]: sshd@12-10.0.0.121:22-10.0.0.1:46690.service: Deactivated successfully. Dec 13 13:31:15.199518 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:31:15.200151 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:31:15.201021 systemd-logind[1477]: Removed session 13. 
Dec 13 13:31:15.214611 systemd-networkd[1417]: lxc_health: Gained IPv6LL Dec 13 13:31:15.726737 systemd-networkd[1417]: lxc371cd27476b8: Gained IPv6LL Dec 13 13:31:15.821540 kubelet[2691]: E1213 13:31:15.821466 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:15.854755 systemd-networkd[1417]: lxc814e7b75107f: Gained IPv6LL Dec 13 13:31:16.823121 kubelet[2691]: E1213 13:31:16.823075 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:17.808203 containerd[1502]: time="2024-12-13T13:31:17.807995342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:31:17.808203 containerd[1502]: time="2024-12-13T13:31:17.808056907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:31:17.808203 containerd[1502]: time="2024-12-13T13:31:17.808075572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:31:17.808734 containerd[1502]: time="2024-12-13T13:31:17.808218201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:31:17.829621 systemd[1]: Started cri-containerd-f5d79709449acb3f850578a4188cee399cad51775ccb9305aaff9aa93f1469dd.scope - libcontainer container f5d79709449acb3f850578a4188cee399cad51775ccb9305aaff9aa93f1469dd. Dec 13 13:31:17.835550 containerd[1502]: time="2024-12-13T13:31:17.835291641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:31:17.835550 containerd[1502]: time="2024-12-13T13:31:17.835344791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:31:17.835550 containerd[1502]: time="2024-12-13T13:31:17.835357304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:31:17.835550 containerd[1502]: time="2024-12-13T13:31:17.835440610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:31:17.844530 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:31:17.852893 systemd[1]: run-containerd-runc-k8s.io-b477f67006f3fec3ec5386c4fae2e8a1fda9fbe9dcb51e5f240a6be5f853b8d8-runc.Ee8MWi.mount: Deactivated successfully. Dec 13 13:31:17.867697 systemd[1]: Started cri-containerd-b477f67006f3fec3ec5386c4fae2e8a1fda9fbe9dcb51e5f240a6be5f853b8d8.scope - libcontainer container b477f67006f3fec3ec5386c4fae2e8a1fda9fbe9dcb51e5f240a6be5f853b8d8. 
Dec 13 13:31:17.880155 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:31:17.880874 containerd[1502]: time="2024-12-13T13:31:17.880839274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zw6x4,Uid:462d47fa-b1ba-4942-91f7-f1156a0d9b34,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5d79709449acb3f850578a4188cee399cad51775ccb9305aaff9aa93f1469dd\"" Dec 13 13:31:17.882007 kubelet[2691]: E1213 13:31:17.881763 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:17.886664 containerd[1502]: time="2024-12-13T13:31:17.886625960Z" level=info msg="CreateContainer within sandbox \"f5d79709449acb3f850578a4188cee399cad51775ccb9305aaff9aa93f1469dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:31:17.910169 containerd[1502]: time="2024-12-13T13:31:17.910124869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-f9k69,Uid:e70b380c-ef77-4381-9584-f84e7fb6946f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b477f67006f3fec3ec5386c4fae2e8a1fda9fbe9dcb51e5f240a6be5f853b8d8\"" Dec 13 13:31:17.910925 kubelet[2691]: E1213 13:31:17.910895 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:17.914464 containerd[1502]: time="2024-12-13T13:31:17.914298275Z" level=info msg="CreateContainer within sandbox \"b477f67006f3fec3ec5386c4fae2e8a1fda9fbe9dcb51e5f240a6be5f853b8d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:31:18.150419 containerd[1502]: time="2024-12-13T13:31:18.150238066Z" level=info msg="CreateContainer within sandbox \"b477f67006f3fec3ec5386c4fae2e8a1fda9fbe9dcb51e5f240a6be5f853b8d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e1d04d73a735738e5f9a867502e33e6dd263b0efe899322d877ac9410c260295\"" Dec 13 13:31:18.150992 containerd[1502]: time="2024-12-13T13:31:18.150897455Z" level=info msg="StartContainer for \"e1d04d73a735738e5f9a867502e33e6dd263b0efe899322d877ac9410c260295\"" Dec 13 13:31:18.151594 containerd[1502]: time="2024-12-13T13:31:18.151551823Z" level=info msg="CreateContainer within sandbox \"f5d79709449acb3f850578a4188cee399cad51775ccb9305aaff9aa93f1469dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a26ba051628c8a3488ce7843ee7f4400b97dc04016440a6f7d5ad56271ea1458\"" Dec 13 13:31:18.151969 containerd[1502]: time="2024-12-13T13:31:18.151946274Z" level=info msg="StartContainer for \"a26ba051628c8a3488ce7843ee7f4400b97dc04016440a6f7d5ad56271ea1458\"" Dec 13 13:31:18.177666 systemd[1]: Started cri-containerd-e1d04d73a735738e5f9a867502e33e6dd263b0efe899322d877ac9410c260295.scope - libcontainer container e1d04d73a735738e5f9a867502e33e6dd263b0efe899322d877ac9410c260295. Dec 13 13:31:18.186685 systemd[1]: Started cri-containerd-a26ba051628c8a3488ce7843ee7f4400b97dc04016440a6f7d5ad56271ea1458.scope - libcontainer container a26ba051628c8a3488ce7843ee7f4400b97dc04016440a6f7d5ad56271ea1458. 
Dec 13 13:31:18.229313 containerd[1502]: time="2024-12-13T13:31:18.229233337Z" level=info msg="StartContainer for \"e1d04d73a735738e5f9a867502e33e6dd263b0efe899322d877ac9410c260295\" returns successfully" Dec 13 13:31:18.230301 containerd[1502]: time="2024-12-13T13:31:18.230261988Z" level=info msg="StartContainer for \"a26ba051628c8a3488ce7843ee7f4400b97dc04016440a6f7d5ad56271ea1458\" returns successfully" Dec 13 13:31:18.827673 kubelet[2691]: E1213 13:31:18.827601 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:18.830444 kubelet[2691]: E1213 13:31:18.829945 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:18.847610 kubelet[2691]: I1213 13:31:18.847537 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-f9k69" podStartSLOduration=30.847516277 podStartE2EDuration="30.847516277s" podCreationTimestamp="2024-12-13 13:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:31:18.837856759 +0000 UTC m=+46.234831058" watchObservedRunningTime="2024-12-13 13:31:18.847516277 +0000 UTC m=+46.244490576" Dec 13 13:31:18.861128 kubelet[2691]: I1213 13:31:18.860847 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zw6x4" podStartSLOduration=30.860828723 podStartE2EDuration="30.860828723s" podCreationTimestamp="2024-12-13 13:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:31:18.848141591 +0000 UTC m=+46.245115890" watchObservedRunningTime="2024-12-13 13:31:18.860828723 +0000 UTC m=+46.257803022" Dec 13 13:31:19.831156 kubelet[2691]: E1213 13:31:19.831131 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:19.831573 kubelet[2691]: E1213 13:31:19.831174 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:20.207834 systemd[1]: Started sshd@13-10.0.0.121:22-10.0.0.1:38238.service - OpenSSH per-connection server daemon (10.0.0.1:38238). Dec 13 13:31:20.247983 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 38238 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:20.249362 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:20.253350 systemd-logind[1477]: New session 14 of user core. Dec 13 13:31:20.263632 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:31:20.411060 sshd[4174]: Connection closed by 10.0.0.1 port 38238 Dec 13 13:31:20.411597 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:20.423520 systemd[1]: sshd@13-10.0.0.121:22-10.0.0.1:38238.service: Deactivated successfully. Dec 13 13:31:20.425381 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:31:20.426991 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. 
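The pod_startup_latency_tracker entries above are straightforward arithmetic: podStartSLOduration is observedRunningTime minus podCreationTimestamp (both coredns pods were created at 13:30:48 and observed running at 13:31:18). A sketch of the computation, using only the timestamps printed in the log (reformatted to RFC 3339):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the coredns-7db6d8ff4d-f9k69 entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2024-12-13T13:30:48Z")
	running, _ := time.Parse(time.RFC3339Nano, "2024-12-13T13:31:18.847516277Z")
	// Prints 30.847516277, matching the logged podStartSLOduration.
	fmt.Printf("podStartSLOduration=%.9fs\n", running.Sub(created).Seconds())
}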
Dec 13 13:31:20.432702 systemd[1]: Started sshd@14-10.0.0.121:22-10.0.0.1:38240.service - OpenSSH per-connection server daemon (10.0.0.1:38240). Dec 13 13:31:20.433829 systemd-logind[1477]: Removed session 14. Dec 13 13:31:20.465349 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 38240 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:20.466740 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:20.470643 systemd-logind[1477]: New session 15 of user core. Dec 13 13:31:20.478600 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:31:20.736774 sshd[4190]: Connection closed by 10.0.0.1 port 38240 Dec 13 13:31:20.737160 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:20.748352 systemd[1]: sshd@14-10.0.0.121:22-10.0.0.1:38240.service: Deactivated successfully. Dec 13 13:31:20.750156 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:31:20.751835 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:31:20.757706 systemd[1]: Started sshd@15-10.0.0.121:22-10.0.0.1:38256.service - OpenSSH per-connection server daemon (10.0.0.1:38256). Dec 13 13:31:20.758667 systemd-logind[1477]: Removed session 15. Dec 13 13:31:20.789411 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 38256 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:20.791191 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:20.795943 systemd-logind[1477]: New session 16 of user core. Dec 13 13:31:20.805631 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:31:20.832970 kubelet[2691]: E1213 13:31:20.832930 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:20.833323 kubelet[2691]: E1213 13:31:20.833096 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:20.971250 sshd[4203]: Connection closed by 10.0.0.1 port 38256 Dec 13 13:31:20.971690 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:20.975237 systemd[1]: sshd@15-10.0.0.121:22-10.0.0.1:38256.service: Deactivated successfully. Dec 13 13:31:20.977300 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:31:20.977921 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:31:20.978817 systemd-logind[1477]: Removed session 16. Dec 13 13:31:25.983783 systemd[1]: Started sshd@16-10.0.0.121:22-10.0.0.1:38272.service - OpenSSH per-connection server daemon (10.0.0.1:38272). Dec 13 13:31:26.021784 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 38272 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:26.023076 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:26.027185 systemd-logind[1477]: New session 17 of user core. Dec 13 13:31:26.040621 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 13:31:26.145122 sshd[4219]: Connection closed by 10.0.0.1 port 38272 Dec 13 13:31:26.145446 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:26.149176 systemd[1]: sshd@16-10.0.0.121:22-10.0.0.1:38272.service: Deactivated successfully. Dec 13 13:31:26.151272 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:31:26.151891 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:31:26.152799 systemd-logind[1477]: Removed session 17. Dec 13 13:31:31.156514 systemd[1]: Started sshd@17-10.0.0.121:22-10.0.0.1:46612.service - OpenSSH per-connection server daemon (10.0.0.1:46612). Dec 13 13:31:31.192158 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 46612 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:31.193508 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:31.197134 systemd-logind[1477]: New session 18 of user core. Dec 13 13:31:31.206581 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:31:31.309785 sshd[4234]: Connection closed by 10.0.0.1 port 46612 Dec 13 13:31:31.310125 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:31.317133 systemd[1]: sshd@17-10.0.0.121:22-10.0.0.1:46612.service: Deactivated successfully. Dec 13 13:31:31.318779 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:31:31.320064 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:31:31.324815 systemd[1]: Started sshd@18-10.0.0.121:22-10.0.0.1:46624.service - OpenSSH per-connection server daemon (10.0.0.1:46624). Dec 13 13:31:31.325972 systemd-logind[1477]: Removed session 18. Dec 13 13:31:31.358145 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 46624 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:31.359811 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:31.364355 systemd-logind[1477]: New session 19 of user core. Dec 13 13:31:31.375604 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:31:31.591567 sshd[4249]: Connection closed by 10.0.0.1 port 46624 Dec 13 13:31:31.592325 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:31.604413 systemd[1]: sshd@18-10.0.0.121:22-10.0.0.1:46624.service: Deactivated successfully. Dec 13 13:31:31.606259 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:31:31.607893 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:31:31.620707 systemd[1]: Started sshd@19-10.0.0.121:22-10.0.0.1:46638.service - OpenSSH per-connection server daemon (10.0.0.1:46638). Dec 13 13:31:31.621516 systemd-logind[1477]: Removed session 19. Dec 13 13:31:31.656654 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 46638 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:31.658187 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:31.662660 systemd-logind[1477]: New session 20 of user core. Dec 13 13:31:31.676653 systemd[1]: Started session-20.scope - Session 20 of User core. 
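Sessions 13 through 26 in this stretch of the log all follow the same lifecycle: a per-connection sshd@… service starts, pam_unix opens a session for core, and shortly afterwards the connection closes and the scope is deactivated. A small sketch for pulling those open/close events out of a journal dump like this one (the regular expressions are mine, matched against the lines above rather than any published format):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	opened = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	closed = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	// Reads a journal dump on stdin and prints session lifecycle events.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // dumps like this one have very long lines
	for sc.Scan() {
		line := sc.Text()
		// FindAll handles multiple entries run together on one line, as in this dump.
		for _, m := range opened.FindAllStringSubmatch(line, -1) {
			fmt.Printf("session %s opened for %s\n", m[1], m[2])
		}
		for _, m := range closed.FindAllStringSubmatch(line, -1) {
			fmt.Printf("session %s removed\n", m[1])
		}
	}
}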
Dec 13 13:31:33.030771 sshd[4261]: Connection closed by 10.0.0.1 port 46638 Dec 13 13:31:33.028103 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:33.040090 systemd[1]: sshd@19-10.0.0.121:22-10.0.0.1:46638.service: Deactivated successfully. Dec 13 13:31:33.045598 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 13:31:33.048845 systemd-logind[1477]: Session 20 logged out. Waiting for processes to exit. Dec 13 13:31:33.056774 systemd-logind[1477]: Removed session 20. Dec 13 13:31:33.070939 systemd[1]: Started sshd@20-10.0.0.121:22-10.0.0.1:46640.service - OpenSSH per-connection server daemon (10.0.0.1:46640). Dec 13 13:31:33.107499 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 46640 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:33.109101 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:33.113430 systemd-logind[1477]: New session 21 of user core. Dec 13 13:31:33.123609 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 13:31:33.356159 sshd[4285]: Connection closed by 10.0.0.1 port 46640 Dec 13 13:31:33.356851 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:33.365751 systemd[1]: sshd@20-10.0.0.121:22-10.0.0.1:46640.service: Deactivated successfully. Dec 13 13:31:33.367586 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 13:31:33.369099 systemd-logind[1477]: Session 21 logged out. Waiting for processes to exit. Dec 13 13:31:33.376784 systemd[1]: Started sshd@21-10.0.0.121:22-10.0.0.1:46656.service - OpenSSH per-connection server daemon (10.0.0.1:46656). Dec 13 13:31:33.377958 systemd-logind[1477]: Removed session 21. Dec 13 13:31:33.408575 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 46656 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:33.410347 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:33.414609 systemd-logind[1477]: New session 22 of user core. Dec 13 13:31:33.422664 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 13:31:33.530847 sshd[4297]: Connection closed by 10.0.0.1 port 46656 Dec 13 13:31:33.531186 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:33.535804 systemd[1]: sshd@21-10.0.0.121:22-10.0.0.1:46656.service: Deactivated successfully. Dec 13 13:31:33.538299 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 13:31:33.538997 systemd-logind[1477]: Session 22 logged out. Waiting for processes to exit. Dec 13 13:31:33.539894 systemd-logind[1477]: Removed session 22. Dec 13 13:31:38.543998 systemd[1]: Started sshd@22-10.0.0.121:22-10.0.0.1:36732.service - OpenSSH per-connection server daemon (10.0.0.1:36732). Dec 13 13:31:38.581398 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 36732 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:38.582794 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:38.586378 systemd-logind[1477]: New session 23 of user core. Dec 13 13:31:38.595597 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 13 13:31:38.701959 sshd[4312]: Connection closed by 10.0.0.1 port 36732 Dec 13 13:31:38.702422 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:38.706248 systemd[1]: sshd@22-10.0.0.121:22-10.0.0.1:36732.service: Deactivated successfully. Dec 13 13:31:38.708210 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 13:31:38.708950 systemd-logind[1477]: Session 23 logged out. Waiting for processes to exit. Dec 13 13:31:38.709949 systemd-logind[1477]: Removed session 23. Dec 13 13:31:43.715327 systemd[1]: Started sshd@23-10.0.0.121:22-10.0.0.1:36736.service - OpenSSH per-connection server daemon (10.0.0.1:36736). Dec 13 13:31:43.751497 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 36736 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:43.752908 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:43.757211 systemd-logind[1477]: New session 24 of user core. Dec 13 13:31:43.771609 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 13:31:43.881068 sshd[4329]: Connection closed by 10.0.0.1 port 36736 Dec 13 13:31:43.881418 sshd-session[4327]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:43.885239 systemd[1]: sshd@23-10.0.0.121:22-10.0.0.1:36736.service: Deactivated successfully. Dec 13 13:31:43.887091 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 13:31:43.887767 systemd-logind[1477]: Session 24 logged out. Waiting for processes to exit. Dec 13 13:31:43.888656 systemd-logind[1477]: Removed session 24. Dec 13 13:31:48.896841 systemd[1]: Started sshd@24-10.0.0.121:22-10.0.0.1:34688.service - OpenSSH per-connection server daemon (10.0.0.1:34688). Dec 13 13:31:48.937721 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 34688 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:48.939558 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:48.943601 systemd-logind[1477]: New session 25 of user core. Dec 13 13:31:48.960611 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 13:31:49.082512 sshd[4344]: Connection closed by 10.0.0.1 port 34688 Dec 13 13:31:49.082891 sshd-session[4342]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:49.086861 systemd[1]: sshd@24-10.0.0.121:22-10.0.0.1:34688.service: Deactivated successfully. Dec 13 13:31:49.088918 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 13:31:49.089715 systemd-logind[1477]: Session 25 logged out. Waiting for processes to exit. Dec 13 13:31:49.090721 systemd-logind[1477]: Removed session 25. Dec 13 13:31:49.686145 kubelet[2691]: E1213 13:31:49.686106 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:31:54.094936 systemd[1]: Started sshd@25-10.0.0.121:22-10.0.0.1:34704.service - OpenSSH per-connection server daemon (10.0.0.1:34704). Dec 13 13:31:54.132069 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 34704 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:54.133736 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:54.137502 systemd-logind[1477]: New session 26 of user core. Dec 13 13:31:54.146732 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 13:31:54.264915 sshd[4362]: Connection closed by 10.0.0.1 port 34704 Dec 13 13:31:54.265289 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Dec 13 13:31:54.277821 systemd[1]: sshd@25-10.0.0.121:22-10.0.0.1:34704.service: Deactivated successfully. Dec 13 13:31:54.280020 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 13:31:54.281876 systemd-logind[1477]: Session 26 logged out. Waiting for processes to exit. Dec 13 13:31:54.295792 systemd[1]: Started sshd@26-10.0.0.121:22-10.0.0.1:34706.service - OpenSSH per-connection server daemon (10.0.0.1:34706). Dec 13 13:31:54.297058 systemd-logind[1477]: Removed session 26. Dec 13 13:31:54.330768 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 34706 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:31:54.332284 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:31:54.336625 systemd-logind[1477]: New session 27 of user core. Dec 13 13:31:54.345703 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 13:31:55.709399 containerd[1502]: time="2024-12-13T13:31:55.709336722Z" level=info msg="StopContainer for \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\" with timeout 30 (s)" Dec 13 13:31:55.710008 containerd[1502]: time="2024-12-13T13:31:55.709893725Z" level=info msg="Stop container \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\" with signal terminated" Dec 13 13:31:55.723518 systemd[1]: cri-containerd-97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255.scope: Deactivated successfully. Dec 13 13:31:55.745451 containerd[1502]: time="2024-12-13T13:31:55.745398820Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:31:55.746634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255-rootfs.mount: Deactivated successfully. 
Dec 13 13:31:55.753932 containerd[1502]: time="2024-12-13T13:31:55.753884179Z" level=info msg="StopContainer for \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\" with timeout 2 (s)" Dec 13 13:31:55.754160 containerd[1502]: time="2024-12-13T13:31:55.754138604Z" level=info msg="Stop container \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\" with signal terminated" Dec 13 13:31:55.754630 containerd[1502]: time="2024-12-13T13:31:55.754570771Z" level=info msg="shim disconnected" id=97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255 namespace=k8s.io Dec 13 13:31:55.754630 containerd[1502]: time="2024-12-13T13:31:55.754618211Z" level=warning msg="cleaning up after shim disconnected" id=97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255 namespace=k8s.io Dec 13 13:31:55.754630 containerd[1502]: time="2024-12-13T13:31:55.754626226Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:31:55.761016 systemd-networkd[1417]: lxc_health: Link DOWN Dec 13 13:31:55.761504 systemd-networkd[1417]: lxc_health: Lost carrier Dec 13 13:31:55.790361 containerd[1502]: time="2024-12-13T13:31:55.790296857Z" level=info msg="StopContainer for \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\" returns successfully" Dec 13 13:31:55.792041 systemd[1]: cri-containerd-d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52.scope: Deactivated successfully. Dec 13 13:31:55.792336 systemd[1]: cri-containerd-d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52.scope: Consumed 6.655s CPU time. Dec 13 13:31:55.795701 containerd[1502]: time="2024-12-13T13:31:55.795640454Z" level=info msg="StopPodSandbox for \"1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5\"" Dec 13 13:31:55.811527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52-rootfs.mount: Deactivated successfully. Dec 13 13:31:55.817821 containerd[1502]: time="2024-12-13T13:31:55.817763535Z" level=info msg="shim disconnected" id=d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52 namespace=k8s.io Dec 13 13:31:55.817821 containerd[1502]: time="2024-12-13T13:31:55.817815164Z" level=warning msg="cleaning up after shim disconnected" id=d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52 namespace=k8s.io Dec 13 13:31:55.817821 containerd[1502]: time="2024-12-13T13:31:55.817823700Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:31:55.822612 containerd[1502]: time="2024-12-13T13:31:55.795714555Z" level=info msg="Container to stop \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:31:55.824863 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5-shm.mount: Deactivated successfully. Dec 13 13:31:55.832532 systemd[1]: cri-containerd-1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5.scope: Deactivated successfully. 
Dec 13 13:31:55.835359 containerd[1502]: time="2024-12-13T13:31:55.835326534Z" level=info msg="StopContainer for \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\" returns successfully" Dec 13 13:31:55.835926 containerd[1502]: time="2024-12-13T13:31:55.835895642Z" level=info msg="StopPodSandbox for \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\"" Dec 13 13:31:55.835979 containerd[1502]: time="2024-12-13T13:31:55.835933884Z" level=info msg="Container to stop \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:31:55.835979 containerd[1502]: time="2024-12-13T13:31:55.835966787Z" level=info msg="Container to stop \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:31:55.835979 containerd[1502]: time="2024-12-13T13:31:55.835974863Z" level=info msg="Container to stop \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:31:55.836049 containerd[1502]: time="2024-12-13T13:31:55.835983189Z" level=info msg="Container to stop \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:31:55.836049 containerd[1502]: time="2024-12-13T13:31:55.835991915Z" level=info msg="Container to stop \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:31:55.838031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b-shm.mount: Deactivated successfully. Dec 13 13:31:55.848921 systemd[1]: cri-containerd-de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b.scope: Deactivated successfully. 
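The repeated "Container to stop … must be in running or unknown state" messages above are containerd's CRI plugin checking container state while StopPodSandbox tears a sandbox down: containers that have already exited are noted and skipped rather than signalled again. A reduced sketch of that guard, using my own types rather than containerd's:

package main

import "fmt"

// State mirrors the coarse container states referenced in the log messages.
type State int

const (
	Created State = iota
	Running
	Exited
	Unknown
)

// shouldSignal reports whether a stop signal should be delivered:
// only running (or unknown) containers are signalled; exited ones are left alone,
// which is the case the log entries above are reporting.
func shouldSignal(s State) bool {
	return s == Running || s == Unknown
}

func main() {
	for _, s := range []State{Running, Exited, Unknown} {
		fmt.Printf("state=%d signal=%v\n", s, shouldSignal(s))
	}
}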
Dec 13 13:31:55.859250 containerd[1502]: time="2024-12-13T13:31:55.859155684Z" level=info msg="shim disconnected" id=1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5 namespace=k8s.io Dec 13 13:31:55.859250 containerd[1502]: time="2024-12-13T13:31:55.859216470Z" level=warning msg="cleaning up after shim disconnected" id=1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5 namespace=k8s.io Dec 13 13:31:55.859250 containerd[1502]: time="2024-12-13T13:31:55.859226841Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:31:55.872373 containerd[1502]: time="2024-12-13T13:31:55.871378162Z" level=info msg="shim disconnected" id=de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b namespace=k8s.io Dec 13 13:31:55.872373 containerd[1502]: time="2024-12-13T13:31:55.871443577Z" level=warning msg="cleaning up after shim disconnected" id=de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b namespace=k8s.io Dec 13 13:31:55.872373 containerd[1502]: time="2024-12-13T13:31:55.871451993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:31:55.879092 containerd[1502]: time="2024-12-13T13:31:55.879038134Z" level=info msg="TearDown network for sandbox \"1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5\" successfully" Dec 13 13:31:55.879092 containerd[1502]: time="2024-12-13T13:31:55.879074705Z" level=info msg="StopPodSandbox for \"1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5\" returns successfully" Dec 13 13:31:55.889161 containerd[1502]: time="2024-12-13T13:31:55.889106397Z" level=info msg="TearDown network for sandbox \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" successfully" Dec 13 13:31:55.889161 containerd[1502]: time="2024-12-13T13:31:55.889141745Z" level=info msg="StopPodSandbox for \"de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b\" returns successfully" Dec 13 13:31:55.895302 kubelet[2691]: I1213 13:31:55.895245 2691 scope.go:117] "RemoveContainer" containerID="97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255" Dec 13 13:31:55.902736 containerd[1502]: time="2024-12-13T13:31:55.902682361Z" level=info msg="RemoveContainer for \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\"" Dec 13 13:31:55.906520 containerd[1502]: time="2024-12-13T13:31:55.906458014Z" level=info msg="RemoveContainer for \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\" returns successfully" Dec 13 13:31:55.906780 kubelet[2691]: I1213 13:31:55.906749 2691 scope.go:117] "RemoveContainer" containerID="97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255" Dec 13 13:31:55.907008 containerd[1502]: time="2024-12-13T13:31:55.906971885Z" level=error msg="ContainerStatus for \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\": not found" Dec 13 13:31:55.915758 kubelet[2691]: E1213 13:31:55.915715 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\": not found" containerID="97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255" Dec 13 13:31:55.915932 kubelet[2691]: I1213 13:31:55.915757 2691 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255"} err="failed to get container status \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\": rpc error: code = NotFound desc = an error occurred when try to find container \"97790c5c2cded2838f51cf13cf0d9f75a08d378030c03cb10ad77a4468d6f255\": not found" Dec 13 13:31:55.915932 kubelet[2691]: I1213 13:31:55.915833 2691 scope.go:117] "RemoveContainer" containerID="d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52" Dec 13 13:31:55.917112 containerd[1502]: time="2024-12-13T13:31:55.917068242Z" level=info msg="RemoveContainer for \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\"" Dec 13 13:31:55.920745 containerd[1502]: time="2024-12-13T13:31:55.920702675Z" level=info msg="RemoveContainer for \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\" returns successfully" Dec 13 13:31:55.920937 kubelet[2691]: I1213 13:31:55.920893 2691 scope.go:117] "RemoveContainer" containerID="16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3" Dec 13 13:31:55.922027 containerd[1502]: time="2024-12-13T13:31:55.921992398Z" level=info msg="RemoveContainer for \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\"" Dec 13 13:31:55.925794 containerd[1502]: time="2024-12-13T13:31:55.925762821Z" level=info msg="RemoveContainer for \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\" returns successfully" Dec 13 13:31:55.925991 kubelet[2691]: I1213 13:31:55.925961 2691 scope.go:117] "RemoveContainer" containerID="04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c" Dec 13 13:31:55.926888 containerd[1502]: time="2024-12-13T13:31:55.926864375Z" level=info msg="RemoveContainer for \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\"" Dec 13 13:31:55.930085 containerd[1502]: time="2024-12-13T13:31:55.930053788Z" level=info msg="RemoveContainer for \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\" returns successfully" Dec 13 13:31:55.930224 kubelet[2691]: I1213 13:31:55.930202 2691 scope.go:117] "RemoveContainer" containerID="333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1" Dec 13 13:31:55.931073 containerd[1502]: time="2024-12-13T13:31:55.931027898Z" level=info msg="RemoveContainer for \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\"" Dec 13 13:31:55.934632 containerd[1502]: time="2024-12-13T13:31:55.934601234Z" level=info msg="RemoveContainer for \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\" returns successfully" Dec 13 13:31:55.934773 kubelet[2691]: I1213 13:31:55.934752 2691 scope.go:117] "RemoveContainer" containerID="e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115" Dec 13 13:31:55.935442 containerd[1502]: time="2024-12-13T13:31:55.935411312Z" level=info msg="RemoveContainer for \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\"" Dec 13 13:31:55.938634 containerd[1502]: time="2024-12-13T13:31:55.938606846Z" level=info msg="RemoveContainer for \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\" returns successfully" Dec 13 13:31:55.938845 kubelet[2691]: I1213 13:31:55.938821 2691 scope.go:117] "RemoveContainer" containerID="d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52" Dec 13 13:31:55.939062 containerd[1502]: time="2024-12-13T13:31:55.939031187Z" level=error msg="ContainerStatus for 
\"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\": not found" Dec 13 13:31:55.939172 kubelet[2691]: E1213 13:31:55.939150 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\": not found" containerID="d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52" Dec 13 13:31:55.939205 kubelet[2691]: I1213 13:31:55.939178 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52"} err="failed to get container status \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4a37323f3b2fead48ad664083c73e8b34ca5999b08f2f01c6f855ee3d84ef52\": not found" Dec 13 13:31:55.939205 kubelet[2691]: I1213 13:31:55.939200 2691 scope.go:117] "RemoveContainer" containerID="16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3" Dec 13 13:31:55.939362 containerd[1502]: time="2024-12-13T13:31:55.939335878Z" level=error msg="ContainerStatus for \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\": not found" Dec 13 13:31:55.939465 kubelet[2691]: E1213 13:31:55.939446 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\": not found" containerID="16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3" Dec 13 13:31:55.939528 kubelet[2691]: I1213 13:31:55.939467 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3"} err="failed to get container status \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"16beeffc0f54d47f65c303801b0d2ef073c0e25b9a7d3ff37288526b5ed727a3\": not found" Dec 13 13:31:55.939528 kubelet[2691]: I1213 13:31:55.939496 2691 scope.go:117] "RemoveContainer" containerID="04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c" Dec 13 13:31:55.939635 containerd[1502]: time="2024-12-13T13:31:55.939608219Z" level=error msg="ContainerStatus for \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\": not found" Dec 13 13:31:55.939765 kubelet[2691]: E1213 13:31:55.939736 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\": not found" containerID="04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c" Dec 13 13:31:55.939798 kubelet[2691]: I1213 13:31:55.939764 2691 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c"} err="failed to get container status \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\": rpc error: code = NotFound desc = an error occurred when try to find container \"04b3744596ac332572a71bc06868dea4ca0b9d05310260df66c8490799a7544c\": not found" Dec 13 13:31:55.939798 kubelet[2691]: I1213 13:31:55.939787 2691 scope.go:117] "RemoveContainer" containerID="333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1" Dec 13 13:31:55.940029 containerd[1502]: time="2024-12-13T13:31:55.939990589Z" level=error msg="ContainerStatus for \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\": not found" Dec 13 13:31:55.940205 kubelet[2691]: E1213 13:31:55.940176 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\": not found" containerID="333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1" Dec 13 13:31:55.940241 kubelet[2691]: I1213 13:31:55.940210 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1"} err="failed to get container status \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"333654a04bf074521465144c76702c8283b0f0762b0a686fefed17bab59ef6d1\": not found" Dec 13 13:31:55.940269 kubelet[2691]: I1213 13:31:55.940237 2691 scope.go:117] "RemoveContainer" containerID="e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115" Dec 13 13:31:55.940467 containerd[1502]: time="2024-12-13T13:31:55.940428647Z" level=error msg="ContainerStatus for \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\": not found" Dec 13 13:31:55.940617 kubelet[2691]: E1213 13:31:55.940596 2691 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\": not found" containerID="e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115" Dec 13 13:31:55.940680 kubelet[2691]: I1213 13:31:55.940623 2691 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115"} err="failed to get container status \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\": rpc error: code = NotFound desc = an error occurred when try to find container \"e96eee2d73e2a20aa6041fe54ea6c29a8df1e82f60922d99c10ccb7769685115\": not found" Dec 13 13:31:55.985082 kubelet[2691]: I1213 13:31:55.984973 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-config-path\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: 
\"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985082 kubelet[2691]: I1213 13:31:55.985026 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-bpf-maps\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985082 kubelet[2691]: I1213 13:31:55.985052 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-xtables-lock\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985082 kubelet[2691]: I1213 13:31:55.985067 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-run\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985082 kubelet[2691]: I1213 13:31:55.985083 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cni-path\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985228 kubelet[2691]: I1213 13:31:55.985103 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-clustermesh-secrets\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985228 kubelet[2691]: I1213 13:31:55.985122 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a442a27-a2da-493d-a9c5-a4882c486d72-cilium-config-path\") pod \"2a442a27-a2da-493d-a9c5-a4882c486d72\" (UID: \"2a442a27-a2da-493d-a9c5-a4882c486d72\") " Dec 13 13:31:55.985228 kubelet[2691]: I1213 13:31:55.985139 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwsnc\" (UniqueName: \"kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-kube-api-access-xwsnc\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985228 kubelet[2691]: I1213 13:31:55.985153 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-etc-cni-netd\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985228 kubelet[2691]: I1213 13:31:55.985170 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-hubble-tls\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985228 kubelet[2691]: I1213 13:31:55.985167 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.985367 kubelet[2691]: I1213 13:31:55.985187 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-hostproc\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985367 kubelet[2691]: I1213 13:31:55.985229 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-hostproc" (OuterVolumeSpecName: "hostproc") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.985367 kubelet[2691]: I1213 13:31:55.985252 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-lib-modules\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985367 kubelet[2691]: I1213 13:31:55.985267 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.985367 kubelet[2691]: I1213 13:31:55.985272 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-cgroup\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985512 kubelet[2691]: I1213 13:31:55.985285 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.985512 kubelet[2691]: I1213 13:31:55.985301 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-host-proc-sys-kernel\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985512 kubelet[2691]: I1213 13:31:55.985306 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.985512 kubelet[2691]: I1213 13:31:55.985319 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p59bt\" (UniqueName: \"kubernetes.io/projected/2a442a27-a2da-493d-a9c5-a4882c486d72-kube-api-access-p59bt\") pod \"2a442a27-a2da-493d-a9c5-a4882c486d72\" (UID: \"2a442a27-a2da-493d-a9c5-a4882c486d72\") " Dec 13 13:31:55.985512 kubelet[2691]: I1213 13:31:55.985335 2691 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-host-proc-sys-net\") pod \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\" (UID: \"78f475bc-85ff-47a3-8f1f-5d9cd7115cea\") " Dec 13 13:31:55.985512 kubelet[2691]: I1213 13:31:55.985377 2691 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:55.986132 kubelet[2691]: I1213 13:31:55.985391 2691 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:55.986132 kubelet[2691]: I1213 13:31:55.985404 2691 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:55.986132 kubelet[2691]: I1213 13:31:55.985412 2691 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:55.986132 kubelet[2691]: I1213 13:31:55.985421 2691 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:55.988976 kubelet[2691]: I1213 13:31:55.985320 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.988976 kubelet[2691]: I1213 13:31:55.985439 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.989055 kubelet[2691]: I1213 13:31:55.985449 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.989055 kubelet[2691]: I1213 13:31:55.988976 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.989055 kubelet[2691]: I1213 13:31:55.988806 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cni-path" (OuterVolumeSpecName: "cni-path") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:31:55.990205 kubelet[2691]: I1213 13:31:55.990161 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a442a27-a2da-493d-a9c5-a4882c486d72-kube-api-access-p59bt" (OuterVolumeSpecName: "kube-api-access-p59bt") pod "2a442a27-a2da-493d-a9c5-a4882c486d72" (UID: "2a442a27-a2da-493d-a9c5-a4882c486d72"). InnerVolumeSpecName "kube-api-access-p59bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:31:55.990843 kubelet[2691]: I1213 13:31:55.990749 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:31:55.990994 kubelet[2691]: I1213 13:31:55.990974 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-kube-api-access-xwsnc" (OuterVolumeSpecName: "kube-api-access-xwsnc") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "kube-api-access-xwsnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:31:55.991772 kubelet[2691]: I1213 13:31:55.991745 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 13:31:55.992299 kubelet[2691]: I1213 13:31:55.992269 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "78f475bc-85ff-47a3-8f1f-5d9cd7115cea" (UID: "78f475bc-85ff-47a3-8f1f-5d9cd7115cea"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:31:55.993926 kubelet[2691]: I1213 13:31:55.993891 2691 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a442a27-a2da-493d-a9c5-a4882c486d72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a442a27-a2da-493d-a9c5-a4882c486d72" (UID: "2a442a27-a2da-493d-a9c5-a4882c486d72"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:31:56.085772 kubelet[2691]: I1213 13:31:56.085734 2691 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085772 kubelet[2691]: I1213 13:31:56.085769 2691 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085772 kubelet[2691]: I1213 13:31:56.085779 2691 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085928 kubelet[2691]: I1213 13:31:56.085787 2691 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a442a27-a2da-493d-a9c5-a4882c486d72-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085928 kubelet[2691]: I1213 13:31:56.085797 2691 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xwsnc\" (UniqueName: \"kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-kube-api-access-xwsnc\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085928 kubelet[2691]: I1213 13:31:56.085805 2691 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085928 kubelet[2691]: I1213 13:31:56.085814 2691 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085928 kubelet[2691]: I1213 13:31:56.085822 2691 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085928 kubelet[2691]: I1213 13:31:56.085832 2691 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085928 kubelet[2691]: I1213 13:31:56.085840 2691 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-p59bt\" (UniqueName: \"kubernetes.io/projected/2a442a27-a2da-493d-a9c5-a4882c486d72-kube-api-access-p59bt\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.085928 kubelet[2691]: I1213 13:31:56.085847 2691 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/78f475bc-85ff-47a3-8f1f-5d9cd7115cea-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 13:31:56.201207 systemd[1]: Removed slice kubepods-besteffort-pod2a442a27_a2da_493d_a9c5_a4882c486d72.slice - libcontainer container kubepods-besteffort-pod2a442a27_a2da_493d_a9c5_a4882c486d72.slice. 
Dec 13 13:31:56.687701 kubelet[2691]: I1213 13:31:56.687637 2691 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a442a27-a2da-493d-a9c5-a4882c486d72" path="/var/lib/kubelet/pods/2a442a27-a2da-493d-a9c5-a4882c486d72/volumes"
Dec 13 13:31:56.693285 systemd[1]: Removed slice kubepods-burstable-pod78f475bc_85ff_47a3_8f1f_5d9cd7115cea.slice - libcontainer container kubepods-burstable-pod78f475bc_85ff_47a3_8f1f_5d9cd7115cea.slice.
Dec 13 13:31:56.693622 systemd[1]: kubepods-burstable-pod78f475bc_85ff_47a3_8f1f_5d9cd7115cea.slice: Consumed 6.756s CPU time.
Dec 13 13:31:56.719217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de6b4ccc0cba5b98aa7e62e482e0c580291547ea4ea9df366be37a1ba7b3fc1b-rootfs.mount: Deactivated successfully.
Dec 13 13:31:56.719343 systemd[1]: var-lib-kubelet-pods-78f475bc\x2d85ff\x2d47a3\x2d8f1f\x2d5d9cd7115cea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwsnc.mount: Deactivated successfully.
Dec 13 13:31:56.719453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cb1e36672f2f6d9145a22ae2254e19145ea0ac12a015da174d56d8645d385a5-rootfs.mount: Deactivated successfully.
Dec 13 13:31:56.719583 systemd[1]: var-lib-kubelet-pods-2a442a27\x2da2da\x2d493d\x2da9c5\x2da4882c486d72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp59bt.mount: Deactivated successfully.
Dec 13 13:31:56.719711 systemd[1]: var-lib-kubelet-pods-78f475bc\x2d85ff\x2d47a3\x2d8f1f\x2d5d9cd7115cea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 13:31:56.719821 systemd[1]: var-lib-kubelet-pods-78f475bc\x2d85ff\x2d47a3\x2d8f1f\x2d5d9cd7115cea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 13:31:57.677444 sshd[4376]: Connection closed by 10.0.0.1 port 34706
Dec 13 13:31:57.677944 sshd-session[4374]: pam_unix(sshd:session): session closed for user core
Dec 13 13:31:57.687454 systemd[1]: sshd@26-10.0.0.121:22-10.0.0.1:34706.service: Deactivated successfully.
Dec 13 13:31:57.689222 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 13:31:57.690648 systemd-logind[1477]: Session 27 logged out. Waiting for processes to exit.
Dec 13 13:31:57.702831 systemd[1]: Started sshd@27-10.0.0.121:22-10.0.0.1:56530.service - OpenSSH per-connection server daemon (10.0.0.1:56530).
Dec 13 13:31:57.703852 systemd-logind[1477]: Removed session 27.
Dec 13 13:31:57.739700 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 56530 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:31:57.741081 kubelet[2691]: E1213 13:31:57.741047 2691 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 13:31:57.741268 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:57.745280 systemd-logind[1477]: New session 28 of user core.
Dec 13 13:31:57.754600 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 13:31:58.332884 sshd[4537]: Connection closed by 10.0.0.1 port 56530
Dec 13 13:31:58.333621 sshd-session[4535]: pam_unix(sshd:session): session closed for user core
Dec 13 13:31:58.344054 kubelet[2691]: I1213 13:31:58.343981 2691 topology_manager.go:215] "Topology Admit Handler" podUID="6f99ebe6-ae4f-4a43-812c-4258e0b91c1e" podNamespace="kube-system" podName="cilium-wdw22"
Dec 13 13:31:58.344054 kubelet[2691]: E1213 13:31:58.344047 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78f475bc-85ff-47a3-8f1f-5d9cd7115cea" containerName="mount-cgroup"
Dec 13 13:31:58.344054 kubelet[2691]: E1213 13:31:58.344060 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78f475bc-85ff-47a3-8f1f-5d9cd7115cea" containerName="mount-bpf-fs"
Dec 13 13:31:58.344054 kubelet[2691]: E1213 13:31:58.344069 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a442a27-a2da-493d-a9c5-a4882c486d72" containerName="cilium-operator"
Dec 13 13:31:58.344299 kubelet[2691]: E1213 13:31:58.344077 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78f475bc-85ff-47a3-8f1f-5d9cd7115cea" containerName="apply-sysctl-overwrites"
Dec 13 13:31:58.344299 kubelet[2691]: E1213 13:31:58.344087 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78f475bc-85ff-47a3-8f1f-5d9cd7115cea" containerName="clean-cilium-state"
Dec 13 13:31:58.344299 kubelet[2691]: E1213 13:31:58.344096 2691 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="78f475bc-85ff-47a3-8f1f-5d9cd7115cea" containerName="cilium-agent"
Dec 13 13:31:58.346282 systemd[1]: sshd@27-10.0.0.121:22-10.0.0.1:56530.service: Deactivated successfully.
Dec 13 13:31:58.346937 kubelet[2691]: I1213 13:31:58.346908 2691 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a442a27-a2da-493d-a9c5-a4882c486d72" containerName="cilium-operator"
Dec 13 13:31:58.347006 kubelet[2691]: I1213 13:31:58.346952 2691 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f475bc-85ff-47a3-8f1f-5d9cd7115cea" containerName="cilium-agent"
Dec 13 13:31:58.349564 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 13:31:58.353703 systemd-logind[1477]: Session 28 logged out. Waiting for processes to exit.
Dec 13 13:31:58.357806 systemd-logind[1477]: Removed session 28.
Dec 13 13:31:58.367311 systemd[1]: Started sshd@28-10.0.0.121:22-10.0.0.1:56540.service - OpenSSH per-connection server daemon (10.0.0.1:56540).
Dec 13 13:31:58.378441 systemd[1]: Created slice kubepods-burstable-pod6f99ebe6_ae4f_4a43_812c_4258e0b91c1e.slice - libcontainer container kubepods-burstable-pod6f99ebe6_ae4f_4a43_812c_4258e0b91c1e.slice.
Dec 13 13:31:58.399699 kubelet[2691]: I1213 13:31:58.399630 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-cilium-cgroup\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.399699 kubelet[2691]: I1213 13:31:58.399697 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-cni-path\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.399837 kubelet[2691]: I1213 13:31:58.399742 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-etc-cni-netd\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.399837 kubelet[2691]: I1213 13:31:58.399792 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-xtables-lock\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.399837 kubelet[2691]: I1213 13:31:58.399817 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-clustermesh-secrets\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.399957 kubelet[2691]: I1213 13:31:58.399840 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-host-proc-sys-net\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.399957 kubelet[2691]: I1213 13:31:58.399863 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xqhn\" (UniqueName: \"kubernetes.io/projected/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-kube-api-access-8xqhn\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.399957 kubelet[2691]: I1213 13:31:58.399887 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-bpf-maps\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.399957 kubelet[2691]: I1213 13:31:58.399907 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-host-proc-sys-kernel\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.399957 kubelet[2691]: I1213 13:31:58.399943 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-hostproc\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.400105 kubelet[2691]: I1213 13:31:58.399960 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-cilium-ipsec-secrets\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.400105 kubelet[2691]: I1213 13:31:58.399982 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-lib-modules\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.400105 kubelet[2691]: I1213 13:31:58.400005 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-cilium-run\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.400105 kubelet[2691]: I1213 13:31:58.400024 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-cilium-config-path\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.400105 kubelet[2691]: I1213 13:31:58.400042 2691 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f99ebe6-ae4f-4a43-812c-4258e0b91c1e-hubble-tls\") pod \"cilium-wdw22\" (UID: \"6f99ebe6-ae4f-4a43-812c-4258e0b91c1e\") " pod="kube-system/cilium-wdw22"
Dec 13 13:31:58.405862 sshd[4549]: Accepted publickey for core from 10.0.0.1 port 56540 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:31:58.407502 sshd-session[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:58.412172 systemd-logind[1477]: New session 29 of user core.
Dec 13 13:31:58.423722 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 13:31:58.474353 sshd[4551]: Connection closed by 10.0.0.1 port 56540
Dec 13 13:31:58.474856 sshd-session[4549]: pam_unix(sshd:session): session closed for user core
Dec 13 13:31:58.487230 systemd[1]: sshd@28-10.0.0.121:22-10.0.0.1:56540.service: Deactivated successfully.
Dec 13 13:31:58.489776 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 13:31:58.491747 systemd-logind[1477]: Session 29 logged out. Waiting for processes to exit.
Dec 13 13:31:58.499943 systemd[1]: Started sshd@29-10.0.0.121:22-10.0.0.1:56546.service - OpenSSH per-connection server daemon (10.0.0.1:56546).
Dec 13 13:31:58.501174 systemd-logind[1477]: Removed session 29.
Dec 13 13:31:58.538374 sshd[4557]: Accepted publickey for core from 10.0.0.1 port 56546 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:31:58.540185 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:31:58.544443 systemd-logind[1477]: New session 30 of user core.
Dec 13 13:31:58.553630 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 13:31:58.683499 kubelet[2691]: E1213 13:31:58.683425 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:31:58.684435 containerd[1502]: time="2024-12-13T13:31:58.684023955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdw22,Uid:6f99ebe6-ae4f-4a43-812c-4258e0b91c1e,Namespace:kube-system,Attempt:0,}"
Dec 13 13:31:58.688171 kubelet[2691]: I1213 13:31:58.688141 2691 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78f475bc-85ff-47a3-8f1f-5d9cd7115cea" path="/var/lib/kubelet/pods/78f475bc-85ff-47a3-8f1f-5d9cd7115cea/volumes"
Dec 13 13:31:58.705167 containerd[1502]: time="2024-12-13T13:31:58.705058383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:31:58.705862 containerd[1502]: time="2024-12-13T13:31:58.705204642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:31:58.705862 containerd[1502]: time="2024-12-13T13:31:58.705829053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:31:58.706018 containerd[1502]: time="2024-12-13T13:31:58.705925507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:31:58.728688 systemd[1]: Started cri-containerd-c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6.scope - libcontainer container c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6.
Dec 13 13:31:58.753872 containerd[1502]: time="2024-12-13T13:31:58.753827840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdw22,Uid:6f99ebe6-ae4f-4a43-812c-4258e0b91c1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\""
Dec 13 13:31:58.754659 kubelet[2691]: E1213 13:31:58.754632 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:31:58.757114 containerd[1502]: time="2024-12-13T13:31:58.757071269Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 13:31:58.772680 containerd[1502]: time="2024-12-13T13:31:58.772621393Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5659ebff635a53380b5a5485e919ef6487c34d6db9162c003e06adae7de20ea8\""
Dec 13 13:31:58.773275 containerd[1502]: time="2024-12-13T13:31:58.773123411Z" level=info msg="StartContainer for \"5659ebff635a53380b5a5485e919ef6487c34d6db9162c003e06adae7de20ea8\""
Dec 13 13:31:58.805649 systemd[1]: Started cri-containerd-5659ebff635a53380b5a5485e919ef6487c34d6db9162c003e06adae7de20ea8.scope - libcontainer container 5659ebff635a53380b5a5485e919ef6487c34d6db9162c003e06adae7de20ea8.
Dec 13 13:31:58.833933 containerd[1502]: time="2024-12-13T13:31:58.833875949Z" level=info msg="StartContainer for \"5659ebff635a53380b5a5485e919ef6487c34d6db9162c003e06adae7de20ea8\" returns successfully"
Dec 13 13:31:58.843657 systemd[1]: cri-containerd-5659ebff635a53380b5a5485e919ef6487c34d6db9162c003e06adae7de20ea8.scope: Deactivated successfully.
Dec 13 13:31:58.878796 containerd[1502]: time="2024-12-13T13:31:58.878729927Z" level=info msg="shim disconnected" id=5659ebff635a53380b5a5485e919ef6487c34d6db9162c003e06adae7de20ea8 namespace=k8s.io
Dec 13 13:31:58.878796 containerd[1502]: time="2024-12-13T13:31:58.878791715Z" level=warning msg="cleaning up after shim disconnected" id=5659ebff635a53380b5a5485e919ef6487c34d6db9162c003e06adae7de20ea8 namespace=k8s.io
Dec 13 13:31:58.878796 containerd[1502]: time="2024-12-13T13:31:58.878802917Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:31:58.912123 kubelet[2691]: E1213 13:31:58.912086 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:31:58.913614 containerd[1502]: time="2024-12-13T13:31:58.913520988Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 13:31:58.929011 containerd[1502]: time="2024-12-13T13:31:58.928962175Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"94d11fa60c697cebc9ab36afa9160510b5a0889939ad9f76739fe06b4f9ae1f7\""
Dec 13 13:31:58.930497 containerd[1502]: time="2024-12-13T13:31:58.930433492Z" level=info msg="StartContainer for \"94d11fa60c697cebc9ab36afa9160510b5a0889939ad9f76739fe06b4f9ae1f7\""
Dec 13 13:31:58.958717 systemd[1]: Started cri-containerd-94d11fa60c697cebc9ab36afa9160510b5a0889939ad9f76739fe06b4f9ae1f7.scope - libcontainer container 94d11fa60c697cebc9ab36afa9160510b5a0889939ad9f76739fe06b4f9ae1f7.
Dec 13 13:31:58.984766 containerd[1502]: time="2024-12-13T13:31:58.984728400Z" level=info msg="StartContainer for \"94d11fa60c697cebc9ab36afa9160510b5a0889939ad9f76739fe06b4f9ae1f7\" returns successfully"
Dec 13 13:31:58.991297 systemd[1]: cri-containerd-94d11fa60c697cebc9ab36afa9160510b5a0889939ad9f76739fe06b4f9ae1f7.scope: Deactivated successfully.
Dec 13 13:31:59.024058 containerd[1502]: time="2024-12-13T13:31:59.023986335Z" level=info msg="shim disconnected" id=94d11fa60c697cebc9ab36afa9160510b5a0889939ad9f76739fe06b4f9ae1f7 namespace=k8s.io
Dec 13 13:31:59.024058 containerd[1502]: time="2024-12-13T13:31:59.024047231Z" level=warning msg="cleaning up after shim disconnected" id=94d11fa60c697cebc9ab36afa9160510b5a0889939ad9f76739fe06b4f9ae1f7 namespace=k8s.io
Dec 13 13:31:59.024058 containerd[1502]: time="2024-12-13T13:31:59.024058372Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:31:59.915532 kubelet[2691]: E1213 13:31:59.915496 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:31:59.918041 containerd[1502]: time="2024-12-13T13:31:59.917637414Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 13:31:59.935394 containerd[1502]: time="2024-12-13T13:31:59.935355040Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9bc747ef14893a30814b4b4326b8209ec009eb46a23dfdc3f268e8dfd20f62f0\""
Dec 13 13:31:59.935864 containerd[1502]: time="2024-12-13T13:31:59.935840567Z" level=info msg="StartContainer for \"9bc747ef14893a30814b4b4326b8209ec009eb46a23dfdc3f268e8dfd20f62f0\""
Dec 13 13:31:59.966657 systemd[1]: Started cri-containerd-9bc747ef14893a30814b4b4326b8209ec009eb46a23dfdc3f268e8dfd20f62f0.scope - libcontainer container 9bc747ef14893a30814b4b4326b8209ec009eb46a23dfdc3f268e8dfd20f62f0.
Dec 13 13:31:59.997882 containerd[1502]: time="2024-12-13T13:31:59.997842639Z" level=info msg="StartContainer for \"9bc747ef14893a30814b4b4326b8209ec009eb46a23dfdc3f268e8dfd20f62f0\" returns successfully"
Dec 13 13:31:59.999217 systemd[1]: cri-containerd-9bc747ef14893a30814b4b4326b8209ec009eb46a23dfdc3f268e8dfd20f62f0.scope: Deactivated successfully.
Dec 13 13:32:00.026419 containerd[1502]: time="2024-12-13T13:32:00.026357584Z" level=info msg="shim disconnected" id=9bc747ef14893a30814b4b4326b8209ec009eb46a23dfdc3f268e8dfd20f62f0 namespace=k8s.io
Dec 13 13:32:00.026419 containerd[1502]: time="2024-12-13T13:32:00.026414241Z" level=warning msg="cleaning up after shim disconnected" id=9bc747ef14893a30814b4b4326b8209ec009eb46a23dfdc3f268e8dfd20f62f0 namespace=k8s.io
Dec 13 13:32:00.026419 containerd[1502]: time="2024-12-13T13:32:00.026424251Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:32:00.506112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bc747ef14893a30814b4b4326b8209ec009eb46a23dfdc3f268e8dfd20f62f0-rootfs.mount: Deactivated successfully.
Dec 13 13:32:00.919271 kubelet[2691]: E1213 13:32:00.919242 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:32:00.921649 containerd[1502]: time="2024-12-13T13:32:00.921611428Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 13:32:01.068820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2132813029.mount: Deactivated successfully.
Dec 13 13:32:01.167052 containerd[1502]: time="2024-12-13T13:32:01.166983446Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"03150189eecc92458a8ff3299915624010edab719641a5b2f7894def5d44ca44\""
Dec 13 13:32:01.167796 containerd[1502]: time="2024-12-13T13:32:01.167730830Z" level=info msg="StartContainer for \"03150189eecc92458a8ff3299915624010edab719641a5b2f7894def5d44ca44\""
Dec 13 13:32:01.206600 systemd[1]: Started cri-containerd-03150189eecc92458a8ff3299915624010edab719641a5b2f7894def5d44ca44.scope - libcontainer container 03150189eecc92458a8ff3299915624010edab719641a5b2f7894def5d44ca44.
Dec 13 13:32:01.228963 systemd[1]: cri-containerd-03150189eecc92458a8ff3299915624010edab719641a5b2f7894def5d44ca44.scope: Deactivated successfully.
Dec 13 13:32:01.235944 containerd[1502]: time="2024-12-13T13:32:01.235890941Z" level=info msg="StartContainer for \"03150189eecc92458a8ff3299915624010edab719641a5b2f7894def5d44ca44\" returns successfully"
Dec 13 13:32:01.260064 containerd[1502]: time="2024-12-13T13:32:01.259995540Z" level=info msg="shim disconnected" id=03150189eecc92458a8ff3299915624010edab719641a5b2f7894def5d44ca44 namespace=k8s.io
Dec 13 13:32:01.260064 containerd[1502]: time="2024-12-13T13:32:01.260062297Z" level=warning msg="cleaning up after shim disconnected" id=03150189eecc92458a8ff3299915624010edab719641a5b2f7894def5d44ca44 namespace=k8s.io
Dec 13 13:32:01.260292 containerd[1502]: time="2024-12-13T13:32:01.260076013Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:32:01.506614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03150189eecc92458a8ff3299915624010edab719641a5b2f7894def5d44ca44-rootfs.mount: Deactivated successfully.
Dec 13 13:32:01.921995 kubelet[2691]: E1213 13:32:01.921967 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:32:01.923955 containerd[1502]: time="2024-12-13T13:32:01.923902860Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 13:32:02.191087 containerd[1502]: time="2024-12-13T13:32:02.190964157Z" level=info msg="CreateContainer within sandbox \"c11da78e5e7a978dfc5a9c056afd670a5e4b634a86d5f64a140e988d931eb8d6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a02e3b522772d8bd0ded77d2ee2c0817e9f3d2361410dccbb18bf3e174e6d3c\""
Dec 13 13:32:02.191631 containerd[1502]: time="2024-12-13T13:32:02.191588486Z" level=info msg="StartContainer for \"1a02e3b522772d8bd0ded77d2ee2c0817e9f3d2361410dccbb18bf3e174e6d3c\""
Dec 13 13:32:02.218649 systemd[1]: Started cri-containerd-1a02e3b522772d8bd0ded77d2ee2c0817e9f3d2361410dccbb18bf3e174e6d3c.scope - libcontainer container 1a02e3b522772d8bd0ded77d2ee2c0817e9f3d2361410dccbb18bf3e174e6d3c.
Dec 13 13:32:02.249492 containerd[1502]: time="2024-12-13T13:32:02.249435044Z" level=info msg="StartContainer for \"1a02e3b522772d8bd0ded77d2ee2c0817e9f3d2361410dccbb18bf3e174e6d3c\" returns successfully"
Dec 13 13:32:02.685525 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 13:32:02.929376 kubelet[2691]: E1213 13:32:02.928485 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:32:04.685316 kubelet[2691]: E1213 13:32:04.685249 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:32:04.686678 kubelet[2691]: E1213 13:32:04.686628 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:32:05.764063 systemd-networkd[1417]: lxc_health: Link UP
Dec 13 13:32:05.769787 systemd-networkd[1417]: lxc_health: Gained carrier
Dec 13 13:32:06.688673 kubelet[2691]: E1213 13:32:06.688019 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:32:06.690095 kubelet[2691]: E1213 13:32:06.689246 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:32:06.706093 kubelet[2691]: I1213 13:32:06.706032 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wdw22" podStartSLOduration=8.706013683 podStartE2EDuration="8.706013683s" podCreationTimestamp="2024-12-13 13:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:32:02.945915393 +0000 UTC m=+90.342889722" watchObservedRunningTime="2024-12-13 13:32:06.706013683 +0000 UTC m=+94.102987982"
Dec 13 13:32:06.936680 kubelet[2691]: E1213 13:32:06.936638 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:32:07.695729 systemd-networkd[1417]: lxc_health: Gained IPv6LL
Dec 13 13:32:07.938466 kubelet[2691]: E1213 13:32:07.938428 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:32:11.174197 systemd[1]: run-containerd-runc-k8s.io-1a02e3b522772d8bd0ded77d2ee2c0817e9f3d2361410dccbb18bf3e174e6d3c-runc.Q5KZz7.mount: Deactivated successfully.
Dec 13 13:32:11.220511 sshd[4565]: Connection closed by 10.0.0.1 port 56546
Dec 13 13:32:11.220960 sshd-session[4557]: pam_unix(sshd:session): session closed for user core
Dec 13 13:32:11.224705 systemd[1]: sshd@29-10.0.0.121:22-10.0.0.1:56546.service: Deactivated successfully.
Dec 13 13:32:11.226642 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 13:32:11.227278 systemd-logind[1477]: Session 30 logged out. Waiting for processes to exit.
Dec 13 13:32:11.228216 systemd-logind[1477]: Removed session 30.
Dec 13 13:32:12.686583 kubelet[2691]: E1213 13:32:12.686414 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"