Dec 16 13:03:51.884396 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:03:51.884421 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:03:51.884430 kernel: BIOS-provided physical RAM map:
Dec 16 13:03:51.884436 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Dec 16 13:03:51.884443 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Dec 16 13:03:51.884451 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Dec 16 13:03:51.884458 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Dec 16 13:03:51.884465 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Dec 16 13:03:51.884472 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Dec 16 13:03:51.884478 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Dec 16 13:03:51.884485 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Dec 16 13:03:51.884491 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Dec 16 13:03:51.884498 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Dec 16 13:03:51.884504 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Dec 16 13:03:51.884515 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Dec 16 13:03:51.884522 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Dec 16 13:03:51.884529 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 13:03:51.884535 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:03:51.884542 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:03:51.884551 kernel: NX (Execute Disable) protection: active
Dec 16 13:03:51.884558 kernel: APIC: Static calls initialized
Dec 16 13:03:51.884565 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Dec 16 13:03:51.884572 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Dec 16 13:03:51.884579 kernel: extended physical RAM map:
Dec 16 13:03:51.884586 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Dec 16 13:03:51.884593 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Dec 16 13:03:51.884600 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Dec 16 13:03:51.884607 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Dec 16 13:03:51.884613 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Dec 16 13:03:51.884620 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Dec 16 13:03:51.884629 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Dec 16 13:03:51.884636 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Dec 16 13:03:51.884643 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Dec 16 13:03:51.884650 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Dec 16 13:03:51.884657 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Dec 16 13:03:51.884664 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Dec 16 13:03:51.884671 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Dec 16 13:03:51.884678 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Dec 16 13:03:51.884684 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Dec 16 13:03:51.884692 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Dec 16 13:03:51.884703 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Dec 16 13:03:51.884711 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 13:03:51.884718 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:03:51.884725 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:03:51.884732 kernel: efi: EFI v2.7 by EDK II
Dec 16 13:03:51.884740 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Dec 16 13:03:51.884749 kernel: random: crng init done
Dec 16 13:03:51.884756 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 16 13:03:51.884763 kernel: secureboot: Secure boot enabled
Dec 16 13:03:51.884770 kernel: SMBIOS 2.8 present.
Dec 16 13:03:51.884778 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 16 13:03:51.884795 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:03:51.884809 kernel: Hypervisor detected: KVM
Dec 16 13:03:51.884817 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Dec 16 13:03:51.884838 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:03:51.884846 kernel: kvm-clock: using sched offset of 5437777088 cycles
Dec 16 13:03:51.884854 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:03:51.884861 kernel: tsc: Detected 2794.750 MHz processor
Dec 16 13:03:51.884872 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:03:51.884879 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:03:51.884886 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Dec 16 13:03:51.884902 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:03:51.884910 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:03:51.884917 kernel: Using GB pages for direct mapping
Dec 16 13:03:51.884925 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:03:51.884932 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Dec 16 13:03:51.884940 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 13:03:51.884950 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:03:51.884957 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:03:51.884965 kernel: ACPI: FACS 0x000000009BBDD000 000040
Dec 16 13:03:51.884972 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:03:51.884979 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:03:51.884987 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:03:51.884994 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:03:51.885001 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 16 13:03:51.885011 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Dec 16 13:03:51.885018 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Dec 16 13:03:51.885026 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Dec 16 13:03:51.885033 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Dec 16 13:03:51.885040 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Dec 16 13:03:51.885047 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Dec 16 13:03:51.885055 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Dec 16 13:03:51.885062 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Dec 16 13:03:51.885069 kernel: No NUMA configuration found
Dec 16 13:03:51.885077 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Dec 16 13:03:51.885086 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Dec 16 13:03:51.885094 kernel: Zone ranges:
Dec 16 13:03:51.885101 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:03:51.885108 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Dec 16 13:03:51.885116 kernel: Normal empty
Dec 16 13:03:51.885123 kernel: Device empty
Dec 16 13:03:51.885130 kernel: Movable zone start for each node
Dec 16 13:03:51.885150 kernel: Early memory node ranges
Dec 16 13:03:51.885157 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Dec 16 13:03:51.885167 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Dec 16 13:03:51.885174 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Dec 16 13:03:51.885181 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Dec 16 13:03:51.885189 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Dec 16 13:03:51.885196 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Dec 16 13:03:51.885203 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:03:51.885211 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Dec 16 13:03:51.885218 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 13:03:51.885225 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 16 13:03:51.885235 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 16 13:03:51.885242 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Dec 16 13:03:51.885249 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:03:51.885257 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:03:51.885264 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:03:51.885271 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:03:51.885279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:03:51.885286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:03:51.885293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:03:51.885301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:03:51.885310 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:03:51.885323 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:03:51.885331 kernel: TSC deadline timer available
Dec 16 13:03:51.885338 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:03:51.885346 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:03:51.885360 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:03:51.885369 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:03:51.885376 kernel: CPU topo: Num. cores per package: 4
Dec 16 13:03:51.885384 kernel: CPU topo: Num. threads per package: 4
Dec 16 13:03:51.885392 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 16 13:03:51.885399 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:03:51.885407 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 13:03:51.885417 kernel: kvm-guest: setup PV sched yield
Dec 16 13:03:51.885424 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 16 13:03:51.885432 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:03:51.885440 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:03:51.885448 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 16 13:03:51.885457 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 16 13:03:51.885465 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 16 13:03:51.885473 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 16 13:03:51.885480 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:03:51.885488 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:03:51.885497 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:03:51.885505 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:03:51.885513 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:03:51.885522 kernel: Fallback order for Node 0: 0
Dec 16 13:03:51.885530 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Dec 16 13:03:51.885537 kernel: Policy zone: DMA32
Dec 16 13:03:51.885545 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:03:51.885553 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 13:03:51.885560 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:03:51.885568 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:03:51.885575 kernel: Dynamic Preempt: voluntary
Dec 16 13:03:51.885583 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:03:51.885593 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:03:51.885601 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 13:03:51.885609 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:03:51.885617 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:03:51.885625 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:03:51.885632 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:03:51.885640 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 13:03:51.885648 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:03:51.885656 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:03:51.885666 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 13:03:51.885673 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 16 13:03:51.885681 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:03:51.885689 kernel: Console: colour dummy device 80x25
Dec 16 13:03:51.885696 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:03:51.885704 kernel: ACPI: Core revision 20240827
Dec 16 13:03:51.885712 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 13:03:51.885720 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:03:51.885727 kernel: x2apic enabled
Dec 16 13:03:51.885737 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:03:51.885744 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 13:03:51.885752 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 13:03:51.885760 kernel: kvm-guest: setup PV IPIs
Dec 16 13:03:51.885767 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 13:03:51.885775 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 16 13:03:51.885783 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Dec 16 13:03:51.885791 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:03:51.885798 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 13:03:51.885808 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 13:03:51.885816 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:03:51.885824 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:03:51.885831 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:03:51.885839 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 13:03:51.885847 kernel: active return thunk: retbleed_return_thunk
Dec 16 13:03:51.885854 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 13:03:51.885862 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:03:51.885870 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:03:51.885880 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 13:03:51.885888 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 13:03:51.885896 kernel: active return thunk: srso_return_thunk
Dec 16 13:03:51.885903 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 13:03:51.885911 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:03:51.885919 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:03:51.885926 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:03:51.885934 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:03:51.885944 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 13:03:51.885951 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:03:51.885959 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:03:51.885967 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:03:51.885974 kernel: landlock: Up and running.
Dec 16 13:03:51.885982 kernel: SELinux: Initializing.
Dec 16 13:03:51.885989 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:03:51.885997 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:03:51.886005 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 13:03:51.886014 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 13:03:51.886022 kernel: ... version: 0
Dec 16 13:03:51.886030 kernel: ... bit width: 48
Dec 16 13:03:51.886037 kernel: ... generic registers: 6
Dec 16 13:03:51.886045 kernel: ... value mask: 0000ffffffffffff
Dec 16 13:03:51.886052 kernel: ... max period: 00007fffffffffff
Dec 16 13:03:51.886060 kernel: ... fixed-purpose events: 0
Dec 16 13:03:51.886067 kernel: ... event mask: 000000000000003f
Dec 16 13:03:51.886075 kernel: signal: max sigframe size: 1776
Dec 16 13:03:51.886082 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:03:51.886092 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:03:51.886100 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:03:51.886107 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:03:51.886115 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:03:51.886123 kernel: .... node #0, CPUs: #1 #2 #3
Dec 16 13:03:51.886130 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 13:03:51.886150 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Dec 16 13:03:51.886158 kernel: Memory: 2401020K/2552216K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145256K reserved, 0K cma-reserved)
Dec 16 13:03:51.886166 kernel: devtmpfs: initialized
Dec 16 13:03:51.886175 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:03:51.886183 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Dec 16 13:03:51.886191 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Dec 16 13:03:51.886199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:03:51.886206 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 13:03:51.886214 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:03:51.886222 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:03:51.886229 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:03:51.886239 kernel: audit: type=2000 audit(1765890230.339:1): state=initialized audit_enabled=0 res=1
Dec 16 13:03:51.886246 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:03:51.886254 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:03:51.886261 kernel: cpuidle: using governor menu
Dec 16 13:03:51.886269 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:03:51.886277 kernel: dca service started, version 1.12.1
Dec 16 13:03:51.886285 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Dec 16 13:03:51.886292 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:03:51.886300 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:03:51.886309 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:03:51.886323 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:03:51.886331 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:03:51.886339 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:03:51.886346 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:03:51.886354 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:03:51.886362 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:03:51.886370 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:03:51.886378 kernel: ACPI: Interpreter enabled
Dec 16 13:03:51.886385 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:03:51.886395 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:03:51.886402 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:03:51.886410 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:03:51.886418 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 13:03:51.886425 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:03:51.886593 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:03:51.886714 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 13:03:51.886858 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 13:03:51.886870 kernel: PCI host bridge to bus 0000:00
Dec 16 13:03:51.886989 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:03:51.887095 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:03:51.887218 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:03:51.887337 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 16 13:03:51.887449 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 16 13:03:51.887557 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 16 13:03:51.887662 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:03:51.887815 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:03:51.887961 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:03:51.888198 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Dec 16 13:03:51.888351 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Dec 16 13:03:51.888478 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 13:03:51.888592 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:03:51.888718 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 13:03:51.888834 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Dec 16 13:03:51.888949 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Dec 16 13:03:51.889078 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 16 13:03:51.889228 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:03:51.889402 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Dec 16 13:03:51.889532 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Dec 16 13:03:51.889649 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 16 13:03:51.889781 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:03:51.889902 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Dec 16 13:03:51.890022 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Dec 16 13:03:51.890155 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 16 13:03:51.890292 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Dec 16 13:03:51.890476 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:03:51.890601 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 13:03:51.890725 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 13:03:51.890846 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Dec 16 13:03:51.890961 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Dec 16 13:03:51.891091 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 13:03:51.891262 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Dec 16 13:03:51.891276 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:03:51.891284 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:03:51.891292 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:03:51.891299 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:03:51.891307 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 13:03:51.891326 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 13:03:51.891334 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 13:03:51.891345 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 13:03:51.891353 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 13:03:51.891361 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 13:03:51.891370 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 13:03:51.891378 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 13:03:51.891385 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 13:03:51.891393 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 13:03:51.891402 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 13:03:51.891413 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 13:03:51.891425 kernel: iommu: Default domain type: Translated
Dec 16 13:03:51.891433 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:03:51.891441 kernel: efivars: Registered efivars operations
Dec 16 13:03:51.891449 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:03:51.891456 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:03:51.891464 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Dec 16 13:03:51.891472 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Dec 16 13:03:51.891479 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Dec 16 13:03:51.891487 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Dec 16 13:03:51.891496 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Dec 16 13:03:51.891624 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 13:03:51.891740 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 13:03:51.891860 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:03:51.891871 kernel: vgaarb: loaded
Dec 16 13:03:51.891879 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 13:03:51.891887 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 13:03:51.891895 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:03:51.891906 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:03:51.891914 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:03:51.891921 kernel: pnp: PnP ACPI init
Dec 16 13:03:51.892057 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 16 13:03:51.892070 kernel: pnp: PnP ACPI: found 6 devices
Dec 16 13:03:51.892078 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:03:51.892086 kernel: NET: Registered PF_INET protocol family
Dec 16 13:03:51.892094 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:03:51.892102 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 13:03:51.892113 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:03:51.892121 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:03:51.892129 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 13:03:51.892165 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 13:03:51.892173 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:03:51.892183 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:03:51.892193 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:03:51.892203 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:03:51.892337 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Dec 16 13:03:51.892460 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Dec 16 13:03:51.892688 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:03:51.892800 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:03:51.892912 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:03:51.893018 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 16 13:03:51.893129 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 16 13:03:51.893254 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 16 13:03:51.893270 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:03:51.893279 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 16 13:03:51.893287 kernel: Initialise system trusted keyrings
Dec 16 13:03:51.893295 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 13:03:51.893303 kernel: Key type asymmetric registered
Dec 16 13:03:51.893311 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:03:51.893340 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:03:51.893351 kernel: io scheduler mq-deadline registered
Dec 16 13:03:51.893359 kernel: io scheduler kyber registered
Dec 16 13:03:51.893371 kernel: io scheduler bfq registered
Dec 16 13:03:51.893379 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:03:51.893388 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 13:03:51.893396 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 13:03:51.893404 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 16 13:03:51.893412 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:03:51.893420 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:03:51.893429 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:03:51.893437 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:03:51.893447 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:03:51.893455 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:03:51.893585 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 16 13:03:51.893702 kernel: rtc_cmos 00:04: registered as rtc0
Dec 16 13:03:51.893812 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T13:03:51 UTC (1765890231)
Dec 16 13:03:51.893927 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 16 13:03:51.893939 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 13:03:51.893947 kernel: efifb: probing for efifb
Dec 16 13:03:51.893958 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 16 13:03:51.893966 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 16 13:03:51.893974 kernel: efifb: scrolling: redraw
Dec 16 13:03:51.893982 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:03:51.893990 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 13:03:51.894002 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:03:51.894010 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:03:51.894018 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:03:51.894027 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:03:51.894037 kernel: Segment Routing with IPv6
Dec 16 13:03:51.894048 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:03:51.894056 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:03:51.894064 kernel: Key type dns_resolver registered
Dec 16 13:03:51.894072 kernel: IPI shorthand broadcast: enabled
Dec 16 13:03:51.894080 kernel: sched_clock: Marking stable (2804003612, 251767835)->(3184842389, -129070942)
Dec 16 13:03:51.894090 kernel: registered taskstats version 1
Dec 16 13:03:51.894098 kernel: Loading compiled-in X.509 certificates
Dec 16 13:03:51.894106 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:03:51.894114 kernel: Demotion targets for Node 0: null
Dec 16 13:03:51.894122 kernel: Key type .fscrypt registered
Dec 16 13:03:51.894130 kernel: Key type fscrypt-provisioning registered
Dec 16 13:03:51.894154 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:03:51.894188 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:03:51.894199 kernel: ima: No architecture policies found
Dec 16 13:03:51.894207 kernel: clk: Disabling unused clocks
Dec 16 13:03:51.894215 kernel: Warning: unable to open an initial console.
Dec 16 13:03:51.894224 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 13:03:51.894232 kernel: Write protecting the kernel read-only data: 40960k Dec 16 13:03:51.894240 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 13:03:51.894248 kernel: Run /init as init process Dec 16 13:03:51.894255 kernel: with arguments: Dec 16 13:03:51.894266 kernel: /init Dec 16 13:03:51.894278 kernel: with environment: Dec 16 13:03:51.894286 kernel: HOME=/ Dec 16 13:03:51.894294 kernel: TERM=linux Dec 16 13:03:51.894303 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:03:51.894315 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:03:51.894331 systemd[1]: Detected virtualization kvm. Dec 16 13:03:51.894340 systemd[1]: Detected architecture x86-64. Dec 16 13:03:51.894350 systemd[1]: Running in initrd. Dec 16 13:03:51.894359 systemd[1]: No hostname configured, using default hostname. Dec 16 13:03:51.894368 systemd[1]: Hostname set to . Dec 16 13:03:51.894377 systemd[1]: Initializing machine ID from VM UUID. Dec 16 13:03:51.894385 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:03:51.894394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:03:51.894403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:03:51.894412 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:03:51.894424 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Dec 16 13:03:51.894435 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:03:51.894447 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:03:51.894457 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:03:51.894466 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:03:51.894475 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:03:51.894483 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:03:51.894494 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:03:51.894503 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:03:51.894511 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:03:51.894520 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:03:51.894528 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:03:51.894537 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:03:51.894546 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:03:51.894554 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:03:51.894563 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:03:51.894574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:03:51.894582 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:03:51.894591 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:03:51.894600 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Dec 16 13:03:51.894608 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:03:51.894617 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:03:51.894626 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:03:51.894635 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:03:51.894649 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:03:51.894659 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:03:51.894668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:03:51.894677 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:03:51.894707 systemd-journald[201]: Collecting audit messages is disabled. Dec 16 13:03:51.894730 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:03:51.894739 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:03:51.894748 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:03:51.894757 systemd-journald[201]: Journal started Dec 16 13:03:51.894779 systemd-journald[201]: Runtime Journal (/run/log/journal/403a87cbc4d64b19a15f389844d766e5) is 5.9M, max 47.9M, 41.9M free. Dec 16 13:03:51.889060 systemd-modules-load[203]: Inserted module 'overlay' Dec 16 13:03:51.900390 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:03:51.907788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:03:51.912647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:03:51.915612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 16 13:03:51.923936 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:03:51.924407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:03:51.933114 kernel: Bridge firewalling registered Dec 16 13:03:51.924637 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:03:51.927310 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:03:51.930286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:03:51.933044 systemd-modules-load[203]: Inserted module 'br_netfilter' Dec 16 13:03:51.934248 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:03:51.936905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:03:51.944920 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:03:51.948107 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:03:51.955027 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:03:51.955896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:03:51.961739 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 16 13:03:51.976782 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:03:52.015874 systemd-resolved[246]: Positive Trust Anchors: Dec 16 13:03:52.015888 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:03:52.015917 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:03:52.018416 systemd-resolved[246]: Defaulting to hostname 'linux'. Dec 16 13:03:52.019398 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:03:52.034983 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:03:52.091169 kernel: SCSI subsystem initialized Dec 16 13:03:52.101166 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:03:52.111170 kernel: iscsi: registered transport (tcp) Dec 16 13:03:52.132733 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:03:52.132771 kernel: QLogic iSCSI HBA Driver Dec 16 13:03:52.154778 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Dec 16 13:03:52.178348 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:03:52.179397 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:03:52.244659 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:03:52.246994 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:03:52.304167 kernel: raid6: avx2x4 gen() 30029 MB/s Dec 16 13:03:52.321165 kernel: raid6: avx2x2 gen() 30597 MB/s Dec 16 13:03:52.338904 kernel: raid6: avx2x1 gen() 25571 MB/s Dec 16 13:03:52.338926 kernel: raid6: using algorithm avx2x2 gen() 30597 MB/s Dec 16 13:03:52.356925 kernel: raid6: .... xor() 19690 MB/s, rmw enabled Dec 16 13:03:52.356955 kernel: raid6: using avx2x2 recovery algorithm Dec 16 13:03:52.378169 kernel: xor: automatically using best checksumming function avx Dec 16 13:03:52.539186 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:03:52.547922 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:03:52.551406 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:03:52.584794 systemd-udevd[454]: Using default interface naming scheme 'v255'. Dec 16 13:03:52.592051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:03:52.593656 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:03:52.617458 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation Dec 16 13:03:52.649391 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:03:52.653335 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:03:52.745417 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:03:52.747121 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 16 13:03:52.795164 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 16 13:03:52.805171 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:03:52.817170 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 16 13:03:52.819493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:03:52.819658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:03:52.830514 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:03:52.830547 kernel: GPT:9289727 != 19775487 Dec 16 13:03:52.830557 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 13:03:52.830567 kernel: GPT:9289727 != 19775487 Dec 16 13:03:52.830577 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:03:52.830591 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:03:52.831047 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:03:52.839347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:03:52.845180 kernel: AES CTR mode by8 optimization enabled Dec 16 13:03:52.847206 kernel: libata version 3.00 loaded. Dec 16 13:03:52.865454 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:03:52.868073 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:03:52.878286 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:03:52.880890 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:03:52.879104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 13:03:52.893154 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 13:03:52.893354 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 13:03:52.893367 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 13:03:52.895987 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 13:03:52.896159 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 13:03:52.903166 kernel: scsi host0: ahci Dec 16 13:03:52.903385 kernel: scsi host1: ahci Dec 16 13:03:52.903487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 13:03:52.907043 kernel: scsi host2: ahci Dec 16 13:03:52.909515 kernel: scsi host3: ahci Dec 16 13:03:52.909666 kernel: scsi host4: ahci Dec 16 13:03:52.911165 kernel: scsi host5: ahci Dec 16 13:03:52.911370 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Dec 16 13:03:52.915149 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Dec 16 13:03:52.915168 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Dec 16 13:03:52.915179 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Dec 16 13:03:52.917158 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Dec 16 13:03:52.917174 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Dec 16 13:03:52.917554 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 13:03:52.924763 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:03:52.947079 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 13:03:52.947737 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Dec 16 13:03:52.956875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 13:03:52.958326 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:03:52.990830 disk-uuid[621]: Primary Header is updated. Dec 16 13:03:52.990830 disk-uuid[621]: Secondary Entries is updated. Dec 16 13:03:52.990830 disk-uuid[621]: Secondary Header is updated. Dec 16 13:03:52.996058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:03:52.998160 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:03:53.229314 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 13:03:53.231172 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 13:03:53.231189 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 13:03:53.232169 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 13:03:53.235168 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 13:03:53.235192 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 13:03:53.236373 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 13:03:53.236387 kernel: ata3.00: applying bridge limits Dec 16 13:03:53.239180 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 13:03:53.239201 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 13:03:53.240309 kernel: ata3.00: configured for UDMA/100 Dec 16 13:03:53.241172 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 13:03:53.301758 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 13:03:53.301965 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 13:03:53.322165 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 16 13:03:53.753911 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:03:53.755106 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 16 13:03:53.757965 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:03:53.761887 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:03:53.766490 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:03:53.803637 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:03:54.000192 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 13:03:54.000248 disk-uuid[622]: The operation has completed successfully. Dec 16 13:03:54.028296 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:03:54.028418 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:03:54.071479 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:03:54.095747 sh[651]: Success Dec 16 13:03:54.114187 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:03:54.114213 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:03:54.115906 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:03:54.125155 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 16 13:03:54.154702 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:03:54.157535 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:03:54.171101 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 16 13:03:54.180494 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (663) Dec 16 13:03:54.180524 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:03:54.180535 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:03:54.185656 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:03:54.185684 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:03:54.187162 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:03:54.190042 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:03:54.190776 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:03:54.191610 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:03:54.199493 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:03:54.225220 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (694) Dec 16 13:03:54.225282 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:03:54.227968 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:03:54.231983 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:03:54.232005 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:03:54.237159 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:03:54.238536 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:03:54.240432 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 13:03:54.329225 ignition[743]: Ignition 2.22.0 Dec 16 13:03:54.329239 ignition[743]: Stage: fetch-offline Dec 16 13:03:54.329282 ignition[743]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:54.329291 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:03:54.329365 ignition[743]: parsed url from cmdline: "" Dec 16 13:03:54.329369 ignition[743]: no config URL provided Dec 16 13:03:54.329374 ignition[743]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:03:54.329382 ignition[743]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:03:54.329404 ignition[743]: op(1): [started] loading QEMU firmware config module Dec 16 13:03:54.329409 ignition[743]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 13:03:54.340802 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:03:54.345750 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:03:54.349892 ignition[743]: op(1): [finished] loading QEMU firmware config module Dec 16 13:03:54.351088 ignition[743]: QEMU firmware config was not found. Ignoring... Dec 16 13:03:54.381725 systemd-networkd[842]: lo: Link UP Dec 16 13:03:54.381735 systemd-networkd[842]: lo: Gained carrier Dec 16 13:03:54.383235 systemd-networkd[842]: Enumeration completed Dec 16 13:03:54.383598 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:03:54.383603 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:03:54.383864 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:03:54.384845 systemd[1]: Reached target network.target - Network. 
Dec 16 13:03:54.385333 systemd-networkd[842]: eth0: Link UP Dec 16 13:03:54.385490 systemd-networkd[842]: eth0: Gained carrier Dec 16 13:03:54.385505 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:03:54.425200 systemd-networkd[842]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 13:03:54.448404 ignition[743]: parsing config with SHA512: 33174aa27f67f2b7bffe98f10363409d33cb9d449a81e7b2eda43d8aa3655f1edd4ed3e822fb44248bfe138e88e9a8183052adf64c42b0b1c58e9dfdf657ea2c Dec 16 13:03:54.454407 unknown[743]: fetched base config from "system" Dec 16 13:03:54.454788 ignition[743]: fetch-offline: fetch-offline passed Dec 16 13:03:54.454420 unknown[743]: fetched user config from "qemu" Dec 16 13:03:54.454838 ignition[743]: Ignition finished successfully Dec 16 13:03:54.460847 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:03:54.462914 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 13:03:54.463692 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:03:54.505725 ignition[847]: Ignition 2.22.0 Dec 16 13:03:54.505742 ignition[847]: Stage: kargs Dec 16 13:03:54.505871 ignition[847]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:54.505882 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:03:54.506687 ignition[847]: kargs: kargs passed Dec 16 13:03:54.506726 ignition[847]: Ignition finished successfully Dec 16 13:03:54.512637 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:03:54.514719 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 16 13:03:54.553971 ignition[855]: Ignition 2.22.0 Dec 16 13:03:54.553984 ignition[855]: Stage: disks Dec 16 13:03:54.554105 ignition[855]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:54.554115 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:03:54.554798 ignition[855]: disks: disks passed Dec 16 13:03:54.554840 ignition[855]: Ignition finished successfully Dec 16 13:03:54.560456 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:03:54.561372 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:03:54.564974 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:03:54.568728 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:03:54.572513 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:03:54.575570 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:03:54.579527 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 13:03:54.603287 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 13:03:54.618363 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:03:54.626533 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:03:54.740173 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:03:54.741052 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:03:54.742258 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:03:54.747330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:03:54.748730 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:03:54.750808 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Dec 16 13:03:54.750847 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:03:54.750866 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:03:54.768401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:03:54.770338 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:03:54.781806 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (873) Dec 16 13:03:54.781871 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:03:54.781889 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:03:54.787424 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:03:54.787473 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:03:54.789672 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:03:54.809580 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:03:54.815427 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:03:54.820850 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:03:54.826378 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:03:54.913907 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:03:54.915543 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:03:54.918832 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:03:54.943163 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:03:54.958315 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 16 13:03:54.974350 ignition[987]: INFO : Ignition 2.22.0 Dec 16 13:03:54.974350 ignition[987]: INFO : Stage: mount Dec 16 13:03:54.976824 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:03:54.976824 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 13:03:54.976824 ignition[987]: INFO : mount: mount passed Dec 16 13:03:54.976824 ignition[987]: INFO : Ignition finished successfully Dec 16 13:03:54.985378 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:03:54.987062 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:03:55.175846 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:03:55.177524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:03:55.203176 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (999) Dec 16 13:03:55.206232 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:03:55.206260 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:03:55.209976 kernel: BTRFS info (device vda6): turning on async discard Dec 16 13:03:55.209995 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 13:03:55.211843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:03:55.254397 ignition[1016]: INFO : Ignition 2.22.0
Dec 16 13:03:55.254397 ignition[1016]: INFO : Stage: files
Dec 16 13:03:55.256989 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:55.256989 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:03:55.256989 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:03:55.256989 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:03:55.256989 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:03:55.268355 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:03:55.270904 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:03:55.273192 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:03:55.273192 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:03:55.273192 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:03:55.271324 unknown[1016]: wrote ssh authorized keys file for user: core
Dec 16 13:03:55.309323 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:03:55.387157 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:03:55.387157 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:03:55.393218 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 16 13:03:55.473460 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 16 13:03:55.510294 systemd-networkd[842]: eth0: Gained IPv6LL
Dec 16 13:03:55.561559 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 16 13:03:55.564852 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:03:55.564852 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:03:55.564852 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:03:55.564852 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:03:55.564852 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:03:55.564852 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:03:55.564852 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:03:55.564852 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:03:55.588767 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:03:55.588767 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:03:55.588767 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:03:55.588767 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:03:55.588767 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:03:55.588767 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 16 13:03:55.802811 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 16 13:03:56.246924 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 13:03:56.246924 ignition[1016]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 16 13:03:56.253664 ignition[1016]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:03:56.335269 ignition[1016]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:03:56.335269 ignition[1016]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 16 13:03:56.335269 ignition[1016]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 16 13:03:56.335269 ignition[1016]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:03:56.349431 ignition[1016]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 13:03:56.349431 ignition[1016]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 16 13:03:56.349431 ignition[1016]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:03:56.362270 ignition[1016]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:03:56.369395 ignition[1016]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 13:03:56.372149 ignition[1016]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 13:03:56.372149 ignition[1016]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:03:56.376862 ignition[1016]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:03:56.376862 ignition[1016]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:03:56.376862 ignition[1016]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:03:56.376862 ignition[1016]: INFO : files: files passed
Dec 16 13:03:56.376862 ignition[1016]: INFO : Ignition finished successfully
Dec 16 13:03:56.385227 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:03:56.390819 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:03:56.392284 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:03:56.411625 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:03:56.411767 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:03:56.418215 initrd-setup-root-after-ignition[1045]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 13:03:56.423267 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:03:56.423267 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:03:56.428306 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:03:56.432659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:03:56.434963 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:03:56.440406 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:03:56.472206 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:03:56.472362 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:03:56.475399 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:03:56.479165 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:03:56.482844 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:03:56.483898 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:03:56.521157 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:03:56.523266 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:03:56.549339 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:03:56.553514 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:03:56.554664 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:03:56.558282 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:03:56.558430 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:03:56.563766 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:03:56.567122 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:03:56.568025 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:03:56.572270 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:03:56.575850 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:03:56.579715 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:03:56.582846 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:03:56.586234 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:03:56.589578 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:03:56.593045 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:03:56.596156 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:03:56.599158 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:03:56.599317 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:03:56.604128 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:03:56.604999 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:03:56.609746 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:03:56.612498 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:03:56.613657 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:03:56.613787 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:03:56.621083 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:03:56.621241 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:03:56.622030 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:03:56.626569 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:03:56.632212 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:03:56.632947 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:03:56.637131 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:03:56.639852 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:03:56.639956 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:03:56.642660 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:03:56.642758 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:03:56.645709 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:03:56.645846 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:03:56.648676 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:03:56.648796 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:03:56.655999 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:03:56.660025 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:03:56.661997 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:03:56.662159 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:03:56.665543 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:03:56.665727 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:03:56.677302 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:03:56.677453 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:03:56.694888 ignition[1071]: INFO : Ignition 2.22.0
Dec 16 13:03:56.694888 ignition[1071]: INFO : Stage: umount
Dec 16 13:03:56.697587 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:03:56.697587 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 13:03:56.697587 ignition[1071]: INFO : umount: umount passed
Dec 16 13:03:56.697587 ignition[1071]: INFO : Ignition finished successfully
Dec 16 13:03:56.699106 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:03:56.699294 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:03:56.700266 systemd[1]: Stopped target network.target - Network.
Dec 16 13:03:56.703671 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:03:56.703728 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:03:56.708460 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:03:56.708536 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:03:56.709614 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:03:56.709663 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:03:56.713694 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:03:56.713750 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:03:56.715040 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:03:56.720758 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:03:56.724350 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:03:56.735464 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:03:56.735592 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:03:56.744278 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:03:56.744631 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:03:56.744687 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:03:56.751862 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:03:56.752193 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:03:56.752363 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:03:56.760380 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:03:56.762451 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:03:56.763163 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:03:56.763229 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:03:56.769356 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:03:56.770007 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:03:56.770078 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:03:56.774580 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:03:56.774629 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:03:56.780902 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:03:56.780949 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:03:56.783992 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:03:56.785825 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:03:56.808015 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:03:56.808209 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:03:56.814903 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:03:56.815097 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:03:56.816104 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:03:56.816162 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:03:56.820944 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:03:56.820980 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:03:56.824170 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:03:56.824230 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:03:56.830395 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:03:56.830445 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:03:56.834761 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:03:56.834813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:03:56.841411 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:03:56.842034 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:03:56.842086 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:03:56.850823 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:03:56.850874 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:03:56.856903 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 13:03:56.856964 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:03:56.864232 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:03:56.864285 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:03:56.865073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:03:56.865117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:56.875712 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:03:56.875822 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:03:56.921550 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:03:56.921681 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:03:56.924726 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:03:56.928904 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:03:56.928971 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:03:56.934609 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:03:56.967049 systemd[1]: Switching root.
Dec 16 13:03:57.007225 systemd-journald[201]: Journal stopped
Dec 16 13:03:58.597437 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:03:58.597514 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:03:58.597540 kernel: SELinux: policy capability open_perms=1
Dec 16 13:03:58.597556 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:03:58.597577 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:03:58.597592 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:03:58.597614 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:03:58.597632 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:03:58.597646 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:03:58.597664 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:03:58.597678 kernel: audit: type=1403 audit(1765890237.710:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:03:58.597695 systemd[1]: Successfully loaded SELinux policy in 68.890ms.
Dec 16 13:03:58.597730 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.403ms.
Dec 16 13:03:58.597748 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:03:58.597764 systemd[1]: Detected virtualization kvm.
Dec 16 13:03:58.597780 systemd[1]: Detected architecture x86-64.
Dec 16 13:03:58.597793 systemd[1]: Detected first boot.
Dec 16 13:03:58.597805 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:03:58.597820 zram_generator::config[1117]: No configuration found.
Dec 16 13:03:58.597838 kernel: Guest personality initialized and is inactive
Dec 16 13:03:58.597849 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:03:58.597860 kernel: Initialized host personality
Dec 16 13:03:58.597873 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:03:58.597888 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:03:58.597905 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:03:58.597919 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:03:58.597931 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:03:58.597946 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:03:58.597960 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:03:58.597973 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:03:58.597985 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:03:58.597997 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:03:58.598009 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:03:58.598021 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:03:58.598033 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:03:58.598048 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:03:58.598060 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:03:58.598072 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:03:58.598083 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:03:58.598095 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:03:58.598108 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:03:58.598120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:03:58.598132 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:03:58.601194 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:03:58.601215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:03:58.601231 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:03:58.601247 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:03:58.601263 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:03:58.601278 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:03:58.601294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:03:58.601310 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:03:58.601325 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:03:58.601341 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:03:58.601353 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:03:58.601365 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:03:58.601377 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:03:58.601389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:03:58.601401 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:03:58.601413 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:03:58.601425 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:03:58.601437 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:03:58.601452 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:03:58.601464 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:03:58.601476 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:58.601488 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:03:58.601500 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:03:58.601512 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:03:58.601524 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:03:58.601537 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:03:58.601551 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:03:58.601564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:58.601576 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:03:58.601589 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:03:58.601603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:58.601615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:03:58.601628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:58.601644 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:03:58.601660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:58.601679 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:03:58.601695 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:03:58.601713 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:03:58.601729 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:03:58.601745 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:03:58.601761 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:58.601777 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:03:58.601793 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:03:58.601812 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:03:58.601827 kernel: loop: module loaded
Dec 16 13:03:58.601843 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:03:58.601858 kernel: fuse: init (API version 7.41)
Dec 16 13:03:58.601903 systemd-journald[1181]: Collecting audit messages is disabled.
Dec 16 13:03:58.601939 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:03:58.601953 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:03:58.601967 systemd-journald[1181]: Journal started
Dec 16 13:03:58.601991 systemd-journald[1181]: Runtime Journal (/run/log/journal/403a87cbc4d64b19a15f389844d766e5) is 5.9M, max 47.9M, 41.9M free.
Dec 16 13:03:58.248474 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:03:58.269982 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 13:03:58.270541 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:03:58.604885 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:03:58.604943 systemd[1]: Stopped verity-setup.service.
Dec 16 13:03:58.610186 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:58.614173 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:03:58.615793 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:03:58.617745 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:03:58.619747 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:03:58.621710 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:03:58.623792 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:03:58.625965 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:03:58.628053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:03:58.631511 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:03:58.631763 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:03:58.634459 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:58.634686 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:58.637213 kernel: ACPI: bus type drm_connector registered
Dec 16 13:03:58.637885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:58.638265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:58.640730 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:03:58.641017 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:03:58.643339 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:03:58.643612 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:03:58.645752 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:58.645982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:58.648106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:03:58.650320 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:03:58.652680 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:03:58.655314 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:03:58.670544 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:03:58.674096 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:03:58.677182 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:03:58.679017 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:03:58.679046 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:03:58.681689 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:03:58.695269 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:03:58.697114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:58.698534 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:03:58.701465 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:03:58.703402 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:03:58.704461 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:03:58.706444 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:03:58.708854 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:03:58.719555 systemd-journald[1181]: Time spent on flushing to /var/log/journal/403a87cbc4d64b19a15f389844d766e5 is 19.527ms for 1043 entries.
Dec 16 13:03:58.719555 systemd-journald[1181]: System Journal (/var/log/journal/403a87cbc4d64b19a15f389844d766e5) is 8M, max 195.6M, 187.6M free.
Dec 16 13:03:58.758591 systemd-journald[1181]: Received client request to flush runtime journal.
Dec 16 13:03:58.758635 kernel: loop0: detected capacity change from 0 to 110984
Dec 16 13:03:58.713370 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:03:58.717131 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:03:58.728101 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:03:58.733808 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:03:58.736004 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:03:58.744274 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:03:58.747100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:03:58.755735 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:03:58.759659 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:03:58.761987 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:03:58.764253 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:03:58.780696 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Dec 16 13:03:58.780713 systemd-tmpfiles[1227]: ACLs are not supported, ignoring.
Dec 16 13:03:58.781179 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:03:58.789971 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:03:58.794276 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:03:58.799871 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:03:58.803231 kernel: loop1: detected capacity change from 0 to 128560
Dec 16 13:03:58.804929 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:03:58.829347 kernel: loop2: detected capacity change from 0 to 229808
Dec 16 13:03:58.834401 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:03:58.837875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:03:58.861170 kernel: loop3: detected capacity change from 0 to 110984
Dec 16 13:03:58.860731 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Dec 16 13:03:58.860750 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Dec 16 13:03:58.866621 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:03:58.881192 kernel: loop4: detected capacity change from 0 to 128560
Dec 16 13:03:58.889396 kernel: loop5: detected capacity change from 0 to 229808
Dec 16 13:03:58.901570 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 16 13:03:58.902662 (sd-merge)[1261]: Merged extensions into '/usr'.
Dec 16 13:03:58.908926 systemd[1]: Reload requested from client PID 1225 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:03:58.909070 systemd[1]: Reloading...
Dec 16 13:03:58.962181 zram_generator::config[1284]: No configuration found.
Dec 16 13:03:59.070441 ldconfig[1216]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:03:59.171562 systemd[1]: Reloading finished in 262 ms.
Dec 16 13:03:59.199338 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:03:59.201819 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:03:59.224529 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:03:59.226866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:03:59.234938 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:03:59.239185 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:03:59.243406 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:03:59.243421 systemd[1]: Reloading...
Dec 16 13:03:59.247447 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:03:59.247608 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:03:59.247905 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:03:59.248182 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:03:59.249058 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:03:59.249408 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Dec 16 13:03:59.249490 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Dec 16 13:03:59.254212 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:03:59.254224 systemd-tmpfiles[1326]: Skipping /boot
Dec 16 13:03:59.264420 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:03:59.264562 systemd-tmpfiles[1326]: Skipping /boot
Dec 16 13:03:59.293454 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Dec 16 13:03:59.297185 zram_generator::config[1354]: No configuration found.
Dec 16 13:03:59.431179 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:03:59.462183 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 13:03:59.468212 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:03:59.480828 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 16 13:03:59.481086 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 13:03:59.481281 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 13:03:59.520311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 13:03:59.522743 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:03:59.522916 systemd[1]: Reloading finished in 279 ms.
Dec 16 13:03:59.535385 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:03:59.548429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:03:59.606655 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:59.609469 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:03:59.615642 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:03:59.617982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:59.624161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:59.629306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:59.633045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:59.634998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:59.637303 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:03:59.639465 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:59.640713 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:03:59.647457 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:03:59.651452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:03:59.656306 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:03:59.660389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:03:59.662337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:59.664984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:59.667340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:59.670099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:59.670334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:59.674017 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:59.676265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:59.682546 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:03:59.695088 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:03:59.710914 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:03:59.714111 augenrules[1482]: No rules
Dec 16 13:03:59.716490 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:03:59.716759 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:03:59.719670 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:03:59.725743 kernel: kvm_amd: TSC scaling supported
Dec 16 13:03:59.725828 kernel: kvm_amd: Nested Virtualization enabled
Dec 16 13:03:59.725863 kernel: kvm_amd: Nested Paging enabled
Dec 16 13:03:59.726998 kernel: kvm_amd: LBR virtualization supported
Dec 16 13:03:59.727010 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:03:59.729715 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 16 13:03:59.729746 kernel: kvm_amd: Virtual GIF supported
Dec 16 13:03:59.734798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:59.735065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:03:59.739439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:03:59.743000 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:03:59.750476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:03:59.755253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:03:59.757081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:03:59.757156 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:03:59.759321 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 13:03:59.762220 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:03:59.766169 kernel: EDAC MC: Ver: 3.0.0
Dec 16 13:03:59.768353 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:03:59.770048 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:03:59.770084 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:03:59.772694 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:03:59.774967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:03:59.775201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:03:59.777344 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:03:59.777574 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:03:59.779550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:03:59.779760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:03:59.781976 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:03:59.782227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:03:59.789005 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:03:59.792021 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:03:59.792110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:03:59.818925 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:03:59.891165 systemd-networkd[1457]: lo: Link UP
Dec 16 13:03:59.891177 systemd-networkd[1457]: lo: Gained carrier
Dec 16 13:03:59.892755 systemd-networkd[1457]: Enumeration completed
Dec 16 13:03:59.892900 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:03:59.893155 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:03:59.893160 systemd-networkd[1457]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:03:59.893687 systemd-networkd[1457]: eth0: Link UP
Dec 16 13:03:59.893960 systemd-networkd[1457]: eth0: Gained carrier
Dec 16 13:03:59.893973 systemd-networkd[1457]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:03:59.896495 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:03:59.899542 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:03:59.901933 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 13:03:59.904052 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:03:59.904216 systemd-networkd[1457]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 13:03:59.904881 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
Dec 16 13:04:01.774923 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 16 13:04:01.774969 systemd-timesyncd[1497]: Initial clock synchronization to Tue 2025-12-16 13:04:01.774859 UTC.
Dec 16 13:04:01.780072 systemd-resolved[1459]: Positive Trust Anchors:
Dec 16 13:04:01.780090 systemd-resolved[1459]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:04:01.780121 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:04:01.784095 systemd-resolved[1459]: Defaulting to hostname 'linux'.
Dec 16 13:04:01.785721 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:04:01.787632 systemd[1]: Reached target network.target - Network.
Dec 16 13:04:01.789130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:04:01.791042 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:04:01.792816 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:04:01.794887 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:04:01.796912 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:04:01.798863 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:04:01.800662 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:04:01.802655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:04:01.804643 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:04:01.804673 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:04:01.806101 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:04:01.808336 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:04:01.811683 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:04:01.815258 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:04:01.817414 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:04:01.819503 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:04:01.826740 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:04:01.829315 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:04:01.832518 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:04:01.834804 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:04:01.838511 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:04:01.840173 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:04:01.841813 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:04:01.841848 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:04:01.843142 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:04:01.845815 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:04:01.848190 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:04:01.850335 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:04:01.852891 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:04:01.854659 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:04:01.856626 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:04:01.861385 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:04:01.862747 jq[1525]: false
Dec 16 13:04:01.863285 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:04:01.868317 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:04:01.873259 extend-filesystems[1526]: Found /dev/vda6
Dec 16 13:04:01.874433 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:04:01.875605 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache
Dec 16 13:04:01.875621 oslogin_cache_refresh[1527]: Refreshing passwd entry cache
Dec 16 13:04:01.879565 extend-filesystems[1526]: Found /dev/vda9
Dec 16 13:04:01.881465 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:04:01.882920 extend-filesystems[1526]: Checking size of /dev/vda9
Dec 16 13:04:01.884195 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:04:01.885251 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:04:01.885627 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting
Dec 16 13:04:01.886090 oslogin_cache_refresh[1527]: Failure getting users, quitting
Dec 16 13:04:01.886660 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:04:01.886660 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache
Dec 16 13:04:01.886119 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:04:01.886180 oslogin_cache_refresh[1527]: Refreshing group entry cache
Dec 16 13:04:01.887432 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:04:01.893591 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting
Dec 16 13:04:01.893591 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:04:01.892955 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:04:01.892319 oslogin_cache_refresh[1527]: Failure getting groups, quitting
Dec 16 13:04:01.892328 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:04:01.902267 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:04:01.905715 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:04:01.906062 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:04:01.906675 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:04:01.906744 jq[1547]: true
Dec 16 13:04:01.906977 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:04:01.909386 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:04:01.909713 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:04:01.913007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:04:01.913337 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:04:01.917199 update_engine[1542]: I20251216 13:04:01.917121 1542 main.cc:92] Flatcar Update Engine starting
Dec 16 13:04:01.930635 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:04:01.933147 tar[1551]: linux-amd64/LICENSE
Dec 16 13:04:01.935258 tar[1551]: linux-amd64/helm
Dec 16 13:04:01.935296 jq[1552]: true
Dec 16 13:04:01.960176 dbus-daemon[1523]: [system] SELinux support is enabled
Dec 16 13:04:01.960636 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:04:01.961333 systemd-logind[1537]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 13:04:01.961356 systemd-logind[1537]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 13:04:01.965449 update_engine[1542]: I20251216 13:04:01.965327 1542 update_check_scheduler.cc:74] Next update check in 5m17s
Dec 16 13:04:01.965392 systemd-logind[1537]: New seat seat0.
Dec 16 13:04:01.966181 extend-filesystems[1526]: Resized partition /dev/vda9
Dec 16 13:04:01.973297 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:04:01.996471 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:04:01.996498 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:04:01.997492 dbus-daemon[1523]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 13:04:01.998535 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:04:01.998623 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:04:02.000724 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:04:02.004633 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:04:02.013333 extend-filesystems[1583]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 13:04:02.100161 locksmithd[1582]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:04:02.142278 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 16 13:04:02.478547 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 16 13:04:02.478627 containerd[1553]: time="2025-12-16T13:04:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:04:02.479856 containerd[1553]: time="2025-12-16T13:04:02.479709459Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:04:02.480725 extend-filesystems[1583]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 16 13:04:02.480725 extend-filesystems[1583]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 16 13:04:02.480725 extend-filesystems[1583]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 16 13:04:02.487128 extend-filesystems[1526]: Resized filesystem in /dev/vda9
Dec 16 13:04:02.484710 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:04:02.497075 bash[1581]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.490456519Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.155µs"
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.490477849Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.490493208Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.490645313Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.490657977Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.490679527Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.490736985Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.490746393Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.490986423Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:04:02.497188 containerd[1553]:
time="2025-12-16T13:04:02.490997904Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.491006821Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:04:02.497188 containerd[1553]: time="2025-12-16T13:04:02.491014165Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:04:02.484994 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:04:02.497988 containerd[1553]: time="2025-12-16T13:04:02.491100737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:04:02.497988 containerd[1553]: time="2025-12-16T13:04:02.491343623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:04:02.497988 containerd[1553]: time="2025-12-16T13:04:02.491370072Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:04:02.497988 containerd[1553]: time="2025-12-16T13:04:02.491378989Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:04:02.497988 containerd[1553]: time="2025-12-16T13:04:02.491418203Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:04:02.497988 containerd[1553]: time="2025-12-16T13:04:02.491659425Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:04:02.497988 containerd[1553]: time="2025-12-16T13:04:02.491735868Z" level=info msg="metadata content store policy set"
policy=shared
Dec 16 13:04:02.491801 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:04:02.496154 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 13:04:02.499874 containerd[1553]: time="2025-12-16T13:04:02.499841807Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:04:02.499913 containerd[1553]: time="2025-12-16T13:04:02.499888825Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:04:02.499913 containerd[1553]: time="2025-12-16T13:04:02.499902501Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:04:02.499951 containerd[1553]: time="2025-12-16T13:04:02.499913120Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:04:02.500046 containerd[1553]: time="2025-12-16T13:04:02.499923510Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:04:02.500046 containerd[1553]: time="2025-12-16T13:04:02.499991527Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:04:02.500046 containerd[1553]: time="2025-12-16T13:04:02.500005093Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:04:02.500046 containerd[1553]: time="2025-12-16T13:04:02.500021874Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 13:04:02.500046 containerd[1553]: time="2025-12-16T13:04:02.500038946Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 13:04:02.500046 containerd[1553]: time="2025-12-16T13:04:02.500048284Z" level=info msg="loading plugin"
id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:04:02.500164 containerd[1553]: time="2025-12-16T13:04:02.500056950Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:04:02.500164 containerd[1553]: time="2025-12-16T13:04:02.500068772Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:04:02.500216 containerd[1553]: time="2025-12-16T13:04:02.500171575Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:04:02.500216 containerd[1553]: time="2025-12-16T13:04:02.500188136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:04:02.500216 containerd[1553]: time="2025-12-16T13:04:02.500199788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:04:02.500285 containerd[1553]: time="2025-12-16T13:04:02.500221960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:04:02.500285 containerd[1553]: time="2025-12-16T13:04:02.500269098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:04:02.500285 containerd[1553]: time="2025-12-16T13:04:02.500280028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:04:02.500371 containerd[1553]: time="2025-12-16T13:04:02.500290718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:04:02.500371 containerd[1553]: time="2025-12-16T13:04:02.500300557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:04:02.500371 containerd[1553]: time="2025-12-16T13:04:02.500316126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 
13:04:02.500371 containerd[1553]: time="2025-12-16T13:04:02.500327868Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:04:02.500371 containerd[1553]: time="2025-12-16T13:04:02.500336875Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:04:02.500484 containerd[1553]: time="2025-12-16T13:04:02.500379405Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:04:02.500484 containerd[1553]: time="2025-12-16T13:04:02.500391047Z" level=info msg="Start snapshots syncer" Dec 16 13:04:02.500484 containerd[1553]: time="2025-12-16T13:04:02.500425331Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:04:02.501302 containerd[1553]: time="2025-12-16T13:04:02.501156402Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\"
:false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:04:02.501422 containerd[1553]: time="2025-12-16T13:04:02.501349744Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:04:02.501501 containerd[1553]: time="2025-12-16T13:04:02.501470641Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:04:02.501756 containerd[1553]: time="2025-12-16T13:04:02.501727082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:04:02.501841 containerd[1553]: time="2025-12-16T13:04:02.501804066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:04:02.501864 containerd[1553]: time="2025-12-16T13:04:02.501837909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:04:02.501916 containerd[1553]: time="2025-12-16T13:04:02.501901929Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:04:02.501989 containerd[1553]: time="2025-12-16T13:04:02.501973814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:04:02.502022 containerd[1553]: 
time="2025-12-16T13:04:02.502002338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:04:02.502062 containerd[1553]: time="2025-12-16T13:04:02.502045499Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:04:02.502137 containerd[1553]: time="2025-12-16T13:04:02.502116462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:04:02.502137 containerd[1553]: time="2025-12-16T13:04:02.502134335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:04:02.502214 containerd[1553]: time="2025-12-16T13:04:02.502191983Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:04:02.502324 containerd[1553]: time="2025-12-16T13:04:02.502296469Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:04:02.502324 containerd[1553]: time="2025-12-16T13:04:02.502317248Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:04:02.502390 containerd[1553]: time="2025-12-16T13:04:02.502328830Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:04:02.502414 containerd[1553]: time="2025-12-16T13:04:02.502391758Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:04:02.502414 containerd[1553]: time="2025-12-16T13:04:02.502400825Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:04:02.502414 containerd[1553]: time="2025-12-16T13:04:02.502413038Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:04:02.502485 containerd[1553]: time="2025-12-16T13:04:02.502468422Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:04:02.502512 containerd[1553]: time="2025-12-16T13:04:02.502488930Z" level=info msg="runtime interface created" Dec 16 13:04:02.502512 containerd[1553]: time="2025-12-16T13:04:02.502497436Z" level=info msg="created NRI interface" Dec 16 13:04:02.502549 containerd[1553]: time="2025-12-16T13:04:02.502537110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:04:02.502570 containerd[1553]: time="2025-12-16T13:04:02.502550135Z" level=info msg="Connect containerd service" Dec 16 13:04:02.502906 containerd[1553]: time="2025-12-16T13:04:02.502878741Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:04:02.503670 containerd[1553]: time="2025-12-16T13:04:02.503640489Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:04:02.546167 tar[1551]: linux-amd64/README.md Dec 16 13:04:02.570969 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:04:02.583089 containerd[1553]: time="2025-12-16T13:04:02.583038240Z" level=info msg="Start subscribing containerd event" Dec 16 13:04:02.583194 containerd[1553]: time="2025-12-16T13:04:02.583117078Z" level=info msg="Start recovering state" Dec 16 13:04:02.583281 containerd[1553]: time="2025-12-16T13:04:02.583208319Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 16 13:04:02.583281 containerd[1553]: time="2025-12-16T13:04:02.583247322Z" level=info msg="Start event monitor" Dec 16 13:04:02.583281 containerd[1553]: time="2025-12-16T13:04:02.583268221Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:04:02.583281 containerd[1553]: time="2025-12-16T13:04:02.583276447Z" level=info msg="Start streaming server" Dec 16 13:04:02.588551 containerd[1553]: time="2025-12-16T13:04:02.583291485Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:04:02.588551 containerd[1553]: time="2025-12-16T13:04:02.583294310Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:04:02.588551 containerd[1553]: time="2025-12-16T13:04:02.583317333Z" level=info msg="runtime interface starting up..." Dec 16 13:04:02.588551 containerd[1553]: time="2025-12-16T13:04:02.583329075Z" level=info msg="starting plugins..." Dec 16 13:04:02.588551 containerd[1553]: time="2025-12-16T13:04:02.583344905Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:04:02.588551 containerd[1553]: time="2025-12-16T13:04:02.583516467Z" level=info msg="containerd successfully booted in 0.128009s" Dec 16 13:04:02.583636 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:04:02.767794 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:04:02.792102 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:04:02.795577 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:04:02.817385 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:04:02.817662 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:04:02.820967 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:04:02.842977 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Dec 16 13:04:02.846453 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:04:02.849016 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:04:02.850972 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 13:04:03.203459 systemd-networkd[1457]: eth0: Gained IPv6LL
Dec 16 13:04:03.206634 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:04:03.209306 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:04:03.212538 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 16 13:04:03.215629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:04:03.235336 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:04:03.254085 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 16 13:04:03.254385 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 16 13:04:03.257110 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:04:03.259509 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 13:04:03.969285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:04:03.971497 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 13:04:03.973540 systemd[1]: Startup finished in 2.865s (kernel) + 6.050s (initrd) + 4.459s (userspace) = 13.375s.
Dec 16 13:04:03.986671 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:04:04.424755 kubelet[1656]: E1216 13:04:04.424688 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:04:04.429009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:04:04.429212 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:04:04.429619 systemd[1]: kubelet.service: Consumed 1.018s CPU time, 269.3M memory peak.
Dec 16 13:04:07.036582 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 13:04:07.038012 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:41962.service - OpenSSH per-connection server daemon (10.0.0.1:41962).
Dec 16 13:04:07.120144 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 41962 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:04:07.122013 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:07.128692 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 13:04:07.129746 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 13:04:07.136274 systemd-logind[1537]: New session 1 of user core.
Dec 16 13:04:07.154582 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 13:04:07.157878 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 13:04:07.173951 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:04:07.176672 systemd-logind[1537]: New session c1 of user core.
Dec 16 13:04:07.328355 systemd[1674]: Queued start job for default target default.target.
Dec 16 13:04:07.347822 systemd[1674]: Created slice app.slice - User Application Slice.
Dec 16 13:04:07.347852 systemd[1674]: Reached target paths.target - Paths.
Dec 16 13:04:07.347895 systemd[1674]: Reached target timers.target - Timers.
Dec 16 13:04:07.349404 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:04:07.360195 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:04:07.360329 systemd[1674]: Reached target sockets.target - Sockets.
Dec 16 13:04:07.360365 systemd[1674]: Reached target basic.target - Basic System.
Dec 16 13:04:07.360404 systemd[1674]: Reached target default.target - Main User Target.
Dec 16 13:04:07.360441 systemd[1674]: Startup finished in 176ms.
Dec 16 13:04:07.360750 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:04:07.362459 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:04:07.431489 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:41974.service - OpenSSH per-connection server daemon (10.0.0.1:41974).
Dec 16 13:04:07.497221 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 41974 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:04:07.499055 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:07.503900 systemd-logind[1537]: New session 2 of user core.
Dec 16 13:04:07.514447 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 13:04:07.567949 sshd[1688]: Connection closed by 10.0.0.1 port 41974
Dec 16 13:04:07.568365 sshd-session[1685]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:07.587138 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:41974.service: Deactivated successfully.
Dec 16 13:04:07.589502 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 13:04:07.590410 systemd-logind[1537]: Session 2 logged out. Waiting for processes to exit.
Dec 16 13:04:07.593937 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:41988.service - OpenSSH per-connection server daemon (10.0.0.1:41988).
Dec 16 13:04:07.594633 systemd-logind[1537]: Removed session 2.
Dec 16 13:04:07.649862 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 41988 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:04:07.651995 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:07.657150 systemd-logind[1537]: New session 3 of user core.
Dec 16 13:04:07.665467 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 13:04:07.716952 sshd[1698]: Connection closed by 10.0.0.1 port 41988
Dec 16 13:04:07.717466 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:07.738178 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:41988.service: Deactivated successfully.
Dec 16 13:04:07.740499 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 13:04:07.741333 systemd-logind[1537]: Session 3 logged out. Waiting for processes to exit.
Dec 16 13:04:07.744522 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:42004.service - OpenSSH per-connection server daemon (10.0.0.1:42004).
Dec 16 13:04:07.745106 systemd-logind[1537]: Removed session 3.
Dec 16 13:04:07.799051 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 42004 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:04:07.800888 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:07.805754 systemd-logind[1537]: New session 4 of user core.
Dec 16 13:04:07.821483 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 13:04:07.875134 sshd[1707]: Connection closed by 10.0.0.1 port 42004
Dec 16 13:04:07.875463 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:07.883741 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:42004.service: Deactivated successfully.
Dec 16 13:04:07.885480 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 13:04:07.886179 systemd-logind[1537]: Session 4 logged out. Waiting for processes to exit.
Dec 16 13:04:07.888780 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:42010.service - OpenSSH per-connection server daemon (10.0.0.1:42010).
Dec 16 13:04:07.889416 systemd-logind[1537]: Removed session 4.
Dec 16 13:04:07.953842 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 42010 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:04:07.955772 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:07.960432 systemd-logind[1537]: New session 5 of user core.
Dec 16 13:04:07.974404 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 13:04:08.035898 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 13:04:08.036304 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:04:08.059543 sudo[1717]: pam_unix(sudo:session): session closed for user root
Dec 16 13:04:08.061623 sshd[1716]: Connection closed by 10.0.0.1 port 42010
Dec 16 13:04:08.062130 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:08.081837 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:42010.service: Deactivated successfully.
Dec 16 13:04:08.083764 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 13:04:08.084590 systemd-logind[1537]: Session 5 logged out. Waiting for processes to exit.
Dec 16 13:04:08.087152 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:42022.service - OpenSSH per-connection server daemon (10.0.0.1:42022).
Dec 16 13:04:08.087876 systemd-logind[1537]: Removed session 5.
Dec 16 13:04:08.141912 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 42022 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:04:08.143590 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:08.148702 systemd-logind[1537]: New session 6 of user core.
Dec 16 13:04:08.158444 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 13:04:08.214662 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 13:04:08.214962 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:04:08.220508 sudo[1728]: pam_unix(sudo:session): session closed for user root
Dec 16 13:04:08.226776 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 13:04:08.227079 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:04:08.237697 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:04:08.290561 augenrules[1750]: No rules
Dec 16 13:04:08.292445 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:04:08.292741 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:04:08.294273 sudo[1727]: pam_unix(sudo:session): session closed for user root
Dec 16 13:04:08.296073 sshd[1726]: Connection closed by 10.0.0.1 port 42022
Dec 16 13:04:08.296445 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
Dec 16 13:04:08.306850 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:42022.service: Deactivated successfully.
Dec 16 13:04:08.309256 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 13:04:08.310048 systemd-logind[1537]: Session 6 logged out. Waiting for processes to exit.
Dec 16 13:04:08.313594 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:42028.service - OpenSSH per-connection server daemon (10.0.0.1:42028).
Dec 16 13:04:08.314310 systemd-logind[1537]: Removed session 6.
Dec 16 13:04:08.376284 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 42028 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:04:08.377650 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:04:08.382620 systemd-logind[1537]: New session 7 of user core.
Dec 16 13:04:08.394482 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 13:04:08.448360 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 13:04:08.448661 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:04:08.758129 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 13:04:08.778726 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 13:04:09.020092 dockerd[1783]: time="2025-12-16T13:04:09.019929139Z" level=info msg="Starting up"
Dec 16 13:04:09.020960 dockerd[1783]: time="2025-12-16T13:04:09.020907974Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 13:04:09.035258 dockerd[1783]: time="2025-12-16T13:04:09.035169795Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 13:04:09.491600 dockerd[1783]: time="2025-12-16T13:04:09.491536673Z" level=info msg="Loading containers: start."
Dec 16 13:04:09.504257 kernel: Initializing XFRM netlink socket
Dec 16 13:04:09.777101 systemd-networkd[1457]: docker0: Link UP
Dec 16 13:04:09.782548 dockerd[1783]: time="2025-12-16T13:04:09.782498767Z" level=info msg="Loading containers: done."
Dec 16 13:04:09.800401 dockerd[1783]: time="2025-12-16T13:04:09.800339558Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 13:04:09.800616 dockerd[1783]: time="2025-12-16T13:04:09.800436800Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 13:04:09.800616 dockerd[1783]: time="2025-12-16T13:04:09.800538732Z" level=info msg="Initializing buildkit"
Dec 16 13:04:09.833341 dockerd[1783]: time="2025-12-16T13:04:09.833211182Z" level=info msg="Completed buildkit initialization"
Dec 16 13:04:09.840661 dockerd[1783]: time="2025-12-16T13:04:09.840613031Z" level=info msg="Daemon has completed initialization"
Dec 16 13:04:09.840749 dockerd[1783]: time="2025-12-16T13:04:09.840699383Z" level=info msg="API listen on /run/docker.sock"
Dec 16 13:04:09.840972 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 13:04:10.526256 containerd[1553]: time="2025-12-16T13:04:10.526183245Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 16 13:04:11.474484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86375939.mount: Deactivated successfully.
Dec 16 13:04:12.548407 containerd[1553]: time="2025-12-16T13:04:12.548341451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:12.549134 containerd[1553]: time="2025-12-16T13:04:12.549112797Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712"
Dec 16 13:04:12.550491 containerd[1553]: time="2025-12-16T13:04:12.550438874Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:12.553085 containerd[1553]: time="2025-12-16T13:04:12.553028950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:12.553999 containerd[1553]: time="2025-12-16T13:04:12.553970255Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.027696401s"
Dec 16 13:04:12.554038 containerd[1553]: time="2025-12-16T13:04:12.554005551Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Dec 16 13:04:12.554582 containerd[1553]: time="2025-12-16T13:04:12.554557125Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 16 13:04:13.928527 containerd[1553]: time="2025-12-16T13:04:13.928456136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:13.929854 containerd[1553]: time="2025-12-16T13:04:13.929821646Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781"
Dec 16 13:04:13.931278 containerd[1553]: time="2025-12-16T13:04:13.931210450Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:13.934770 containerd[1553]: time="2025-12-16T13:04:13.934734287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:13.935854 containerd[1553]: time="2025-12-16T13:04:13.935821365Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.381240064s"
Dec 16 13:04:13.935854 containerd[1553]: time="2025-12-16T13:04:13.935851853Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Dec 16 13:04:13.936338 containerd[1553]: time="2025-12-16T13:04:13.936308970Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 16 13:04:14.476205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:04:14.478140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:04:15.335251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:04:15.362705 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:04:15.449698 kubelet[2070]: E1216 13:04:15.449595 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:04:15.457775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:04:15.458036 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:04:15.459156 systemd[1]: kubelet.service: Consumed 233ms CPU time, 110.9M memory peak.
Dec 16 13:04:15.958190 containerd[1553]: time="2025-12-16T13:04:15.958097613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:15.959278 containerd[1553]: time="2025-12-16T13:04:15.959247669Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102"
Dec 16 13:04:15.960802 containerd[1553]: time="2025-12-16T13:04:15.960753262Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:15.963829 containerd[1553]: time="2025-12-16T13:04:15.963755972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:15.964675 containerd[1553]: time="2025-12-16T13:04:15.964636934Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.028293921s"
Dec 16 13:04:15.964675 containerd[1553]: time="2025-12-16T13:04:15.964667812Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Dec 16 13:04:15.965095 containerd[1553]: time="2025-12-16T13:04:15.965071609Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 16 13:04:16.857601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788284796.mount: Deactivated successfully.
Dec 16 13:04:17.679096 containerd[1553]: time="2025-12-16T13:04:17.679022547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:17.679957 containerd[1553]: time="2025-12-16T13:04:17.679922855Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096"
Dec 16 13:04:17.683257 containerd[1553]: time="2025-12-16T13:04:17.681459265Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:17.684597 containerd[1553]: time="2025-12-16T13:04:17.684526577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:17.685142 containerd[1553]: time="2025-12-16T13:04:17.685097657Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.720000029s"
Dec 16 13:04:17.685188 containerd[1553]: time="2025-12-16T13:04:17.685137893Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Dec 16 13:04:17.685646 containerd[1553]: time="2025-12-16T13:04:17.685610008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 16 13:04:18.176567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489538817.mount: Deactivated successfully.
Dec 16 13:04:18.848818 containerd[1553]: time="2025-12-16T13:04:18.848746108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:18.849519 containerd[1553]: time="2025-12-16T13:04:18.849454777Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Dec 16 13:04:18.850819 containerd[1553]: time="2025-12-16T13:04:18.850754574Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:18.853162 containerd[1553]: time="2025-12-16T13:04:18.853119178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:04:18.854134 containerd[1553]: time="2025-12-16T13:04:18.854100447Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.168462718s" Dec 16 13:04:18.854134 containerd[1553]: time="2025-12-16T13:04:18.854131876Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 13:04:18.854593 containerd[1553]: time="2025-12-16T13:04:18.854567653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:04:19.444447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206890550.mount: Deactivated successfully. Dec 16 13:04:19.451159 containerd[1553]: time="2025-12-16T13:04:19.451103268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:04:19.451904 containerd[1553]: time="2025-12-16T13:04:19.451860918Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:04:19.453093 containerd[1553]: time="2025-12-16T13:04:19.453055528Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:04:19.455297 containerd[1553]: time="2025-12-16T13:04:19.455262716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:04:19.455914 containerd[1553]: time="2025-12-16T13:04:19.455880555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 601.288987ms" Dec 16 13:04:19.455914 containerd[1553]: time="2025-12-16T13:04:19.455911062Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:04:19.456460 containerd[1553]: time="2025-12-16T13:04:19.456439042Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 13:04:19.943778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172623469.mount: Deactivated successfully. Dec 16 13:04:22.218976 containerd[1553]: time="2025-12-16T13:04:22.218908452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:22.219742 containerd[1553]: time="2025-12-16T13:04:22.219682653Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Dec 16 13:04:22.220956 containerd[1553]: time="2025-12-16T13:04:22.220930813Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:22.223561 containerd[1553]: time="2025-12-16T13:04:22.223519517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:22.224441 containerd[1553]: time="2025-12-16T13:04:22.224415066Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.767947952s" Dec 16 13:04:22.224480 containerd[1553]: time="2025-12-16T13:04:22.224448078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 13:04:25.476278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:04:25.477838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:25.735518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:25.758853 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:04:25.847846 kubelet[2234]: E1216 13:04:25.847748 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:04:25.852022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:04:25.852255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:04:25.852681 systemd[1]: kubelet.service: Consumed 286ms CPU time, 110.6M memory peak. Dec 16 13:04:26.137326 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:26.137678 systemd[1]: kubelet.service: Consumed 286ms CPU time, 110.6M memory peak. Dec 16 13:04:26.140460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:26.167224 systemd[1]: Reload requested from client PID 2250 ('systemctl') (unit session-7.scope)... Dec 16 13:04:26.167260 systemd[1]: Reloading... Dec 16 13:04:26.262273 zram_generator::config[2295]: No configuration found. 
Dec 16 13:04:27.461577 systemd[1]: Reloading finished in 1293 ms. Dec 16 13:04:27.536064 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:04:27.536184 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:04:27.536540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:27.536609 systemd[1]: kubelet.service: Consumed 161ms CPU time, 98.2M memory peak. Dec 16 13:04:27.538403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:28.354390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:28.363521 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:04:28.403517 kubelet[2340]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:04:28.403517 kubelet[2340]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:04:28.403517 kubelet[2340]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:04:28.403517 kubelet[2340]: I1216 13:04:28.403019 2340 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:04:28.649765 kubelet[2340]: I1216 13:04:28.649648 2340 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:04:28.649765 kubelet[2340]: I1216 13:04:28.649682 2340 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:04:28.649976 kubelet[2340]: I1216 13:04:28.649951 2340 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:04:28.672264 kubelet[2340]: I1216 13:04:28.670837 2340 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:04:28.672452 kubelet[2340]: E1216 13:04:28.672407 2340 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:04:28.678806 kubelet[2340]: I1216 13:04:28.678769 2340 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:04:28.684890 kubelet[2340]: I1216 13:04:28.684868 2340 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:04:28.685127 kubelet[2340]: I1216 13:04:28.685101 2340 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:04:28.685300 kubelet[2340]: I1216 13:04:28.685127 2340 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:04:28.685400 kubelet[2340]: I1216 13:04:28.685309 2340 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:04:28.685400 
kubelet[2340]: I1216 13:04:28.685328 2340 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:04:28.686207 kubelet[2340]: I1216 13:04:28.686180 2340 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:04:28.688246 kubelet[2340]: I1216 13:04:28.688204 2340 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:04:28.688246 kubelet[2340]: I1216 13:04:28.688222 2340 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:04:28.688305 kubelet[2340]: I1216 13:04:28.688267 2340 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:04:28.691466 kubelet[2340]: I1216 13:04:28.691340 2340 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:04:28.696405 kubelet[2340]: I1216 13:04:28.696378 2340 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:04:28.696946 kubelet[2340]: I1216 13:04:28.696917 2340 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:04:28.698139 kubelet[2340]: W1216 13:04:28.697929 2340 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 13:04:28.698139 kubelet[2340]: E1216 13:04:28.698000 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:04:28.698139 kubelet[2340]: E1216 13:04:28.698075 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:04:28.701136 kubelet[2340]: I1216 13:04:28.701116 2340 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:04:28.701222 kubelet[2340]: I1216 13:04:28.701200 2340 server.go:1289] "Started kubelet" Dec 16 13:04:28.701663 kubelet[2340]: I1216 13:04:28.701611 2340 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:04:28.705341 kubelet[2340]: I1216 13:04:28.705141 2340 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:04:28.706082 kubelet[2340]: I1216 13:04:28.705827 2340 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:04:28.706217 kubelet[2340]: I1216 13:04:28.706197 2340 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:04:28.708396 kubelet[2340]: I1216 13:04:28.708381 2340 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:04:28.708582 kubelet[2340]: I1216 13:04:28.708558 2340 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:04:28.708707 kubelet[2340]: I1216 13:04:28.708695 2340 reconciler.go:26] "Reconciler: start to sync state" 
Dec 16 13:04:28.708793 kubelet[2340]: I1216 13:04:28.708701 2340 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:04:28.709854 kubelet[2340]: I1216 13:04:28.709499 2340 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:04:28.709854 kubelet[2340]: I1216 13:04:28.709624 2340 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:04:28.710892 kubelet[2340]: E1216 13:04:28.710864 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:04:28.713264 kubelet[2340]: E1216 13:04:28.711334 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Dec 16 13:04:28.713264 kubelet[2340]: E1216 13:04:28.711857 2340 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:04:28.713264 kubelet[2340]: I1216 13:04:28.712211 2340 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:04:28.714067 kubelet[2340]: E1216 13:04:28.712025 2340 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b3d7e24016cc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 13:04:28.701136588 +0000 UTC m=+0.333456793,LastTimestamp:2025-12-16 13:04:28.701136588 +0000 UTC m=+0.333456793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 13:04:28.714776 kubelet[2340]: I1216 13:04:28.714740 2340 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:04:28.718626 kubelet[2340]: E1216 13:04:28.716999 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:04:28.726318 kubelet[2340]: I1216 13:04:28.726294 2340 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:04:28.726318 kubelet[2340]: I1216 13:04:28.726310 2340 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:04:28.726318 kubelet[2340]: I1216 13:04:28.726325 2340 state_mem.go:36] "Initialized new in-memory state store" Dec 16 
13:04:28.731187 kubelet[2340]: I1216 13:04:28.731134 2340 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 13:04:28.732830 kubelet[2340]: I1216 13:04:28.732791 2340 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:04:28.732908 kubelet[2340]: I1216 13:04:28.732841 2340 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:04:28.732908 kubelet[2340]: I1216 13:04:28.732875 2340 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:04:28.732908 kubelet[2340]: I1216 13:04:28.732895 2340 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:04:28.733005 kubelet[2340]: E1216 13:04:28.732952 2340 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:04:28.735227 kubelet[2340]: E1216 13:04:28.734960 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:04:28.811332 kubelet[2340]: E1216 13:04:28.811275 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:04:28.833584 kubelet[2340]: E1216 13:04:28.833489 2340 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:04:28.911797 kubelet[2340]: E1216 13:04:28.911655 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:04:28.913306 kubelet[2340]: E1216 13:04:28.913222 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Dec 16 13:04:29.011837 kubelet[2340]: E1216 13:04:29.011745 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 13:04:29.013293 kubelet[2340]: I1216 13:04:29.013171 2340 policy_none.go:49] "None policy: Start" Dec 16 13:04:29.013293 kubelet[2340]: I1216 13:04:29.013202 2340 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:04:29.013293 kubelet[2340]: I1216 13:04:29.013217 2340 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:04:29.021171 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:04:29.034015 kubelet[2340]: E1216 13:04:29.033941 2340 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 13:04:29.041723 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:04:29.045700 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 13:04:29.067700 kubelet[2340]: E1216 13:04:29.067540 2340 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:04:29.067854 kubelet[2340]: I1216 13:04:29.067836 2340 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:04:29.067886 kubelet[2340]: I1216 13:04:29.067851 2340 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:04:29.068247 kubelet[2340]: I1216 13:04:29.068077 2340 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:04:29.069522 kubelet[2340]: E1216 13:04:29.069465 2340 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:04:29.069596 kubelet[2340]: E1216 13:04:29.069558 2340 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 13:04:29.169900 kubelet[2340]: I1216 13:04:29.169772 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:04:29.170184 kubelet[2340]: E1216 13:04:29.170158 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Dec 16 13:04:29.313999 kubelet[2340]: E1216 13:04:29.313939 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Dec 16 13:04:29.372065 kubelet[2340]: I1216 13:04:29.372032 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:04:29.372378 kubelet[2340]: E1216 13:04:29.372346 2340 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Dec 16 13:04:29.512323 kubelet[2340]: I1216 13:04:29.512257 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/431d6bf4a6c51145869eb79188539fc3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"431d6bf4a6c51145869eb79188539fc3\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:29.512323 kubelet[2340]: I1216 13:04:29.512307 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/431d6bf4a6c51145869eb79188539fc3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"431d6bf4a6c51145869eb79188539fc3\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:29.512818 kubelet[2340]: I1216 13:04:29.512339 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/431d6bf4a6c51145869eb79188539fc3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"431d6bf4a6c51145869eb79188539fc3\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:29.632560 kubelet[2340]: E1216 13:04:29.632500 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:04:29.708904 systemd[1]: Created slice kubepods-burstable-pod431d6bf4a6c51145869eb79188539fc3.slice - libcontainer container kubepods-burstable-pod431d6bf4a6c51145869eb79188539fc3.slice. 
Dec 16 13:04:29.714106 kubelet[2340]: I1216 13:04:29.714078 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:29.714106 kubelet[2340]: I1216 13:04:29.714107 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:29.714226 kubelet[2340]: I1216 13:04:29.714121 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:29.714226 kubelet[2340]: I1216 13:04:29.714134 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:29.714226 kubelet[2340]: I1216 13:04:29.714149 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:29.721038 kubelet[2340]: E1216 13:04:29.721005 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:04:29.721844 containerd[1553]: time="2025-12-16T13:04:29.721797587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:431d6bf4a6c51145869eb79188539fc3,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:29.774201 kubelet[2340]: I1216 13:04:29.774073 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:04:29.774569 kubelet[2340]: E1216 13:04:29.774526 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Dec 16 13:04:29.891793 kubelet[2340]: E1216 13:04:29.891740 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:04:29.945214 kubelet[2340]: E1216 13:04:29.945158 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:04:29.960548 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Dec 16 13:04:29.962336 kubelet[2340]: E1216 13:04:29.962302 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:04:29.962998 containerd[1553]: time="2025-12-16T13:04:29.962963633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:29.973529 kubelet[2340]: E1216 13:04:29.973472 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:04:30.016470 kubelet[2340]: I1216 13:04:30.016394 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 13:04:30.114790 kubelet[2340]: E1216 13:04:30.114649 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="1.6s" Dec 16 13:04:30.234141 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Dec 16 13:04:30.236293 kubelet[2340]: E1216 13:04:30.236226 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:04:30.236953 containerd[1553]: time="2025-12-16T13:04:30.236905303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:30.576628 kubelet[2340]: I1216 13:04:30.576589 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:04:30.577080 kubelet[2340]: E1216 13:04:30.576967 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Dec 16 13:04:30.698210 kubelet[2340]: E1216 13:04:30.698133 2340 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:04:31.716058 kubelet[2340]: E1216 13:04:31.715998 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="3.2s" Dec 16 13:04:31.837897 kubelet[2340]: E1216 13:04:31.837838 2340 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 
13:04:32.079433 containerd[1553]: time="2025-12-16T13:04:32.079309922Z" level=info msg="connecting to shim f404a6afb538286659a04c9dc6dcc16730eb40632569e0bbf4b103a58b0ef655" address="unix:///run/containerd/s/5621045e825272c64546d78e67349b8d3c5eb26b61d8aec7965966e717f501a4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:32.086273 containerd[1553]: time="2025-12-16T13:04:32.086049278Z" level=info msg="connecting to shim cd96cb2fea93e71ff576ee97ef55d78710ac11dddc74cad508ab16b97e7c4026" address="unix:///run/containerd/s/3b68d3c189e6a0ceb7f8ffd6c9fee5a5bfb0967d24935b49095079b4d428612f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:32.093427 containerd[1553]: time="2025-12-16T13:04:32.093377098Z" level=info msg="connecting to shim ebd399f032d3f6eb14d05c820a906d1bef6b4f793addd2879c4dc9420d823d9e" address="unix:///run/containerd/s/47a56384e887bc21710b077ff809ccdfdc1354d9c852abff51fdcb688e8517fd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:32.165682 systemd[1]: Started cri-containerd-f404a6afb538286659a04c9dc6dcc16730eb40632569e0bbf4b103a58b0ef655.scope - libcontainer container f404a6afb538286659a04c9dc6dcc16730eb40632569e0bbf4b103a58b0ef655. Dec 16 13:04:32.178683 kubelet[2340]: I1216 13:04:32.178636 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:04:32.178955 kubelet[2340]: E1216 13:04:32.178931 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Dec 16 13:04:32.183397 systemd[1]: Started cri-containerd-cd96cb2fea93e71ff576ee97ef55d78710ac11dddc74cad508ab16b97e7c4026.scope - libcontainer container cd96cb2fea93e71ff576ee97ef55d78710ac11dddc74cad508ab16b97e7c4026. 
Dec 16 13:04:32.187331 systemd[1]: Started cri-containerd-ebd399f032d3f6eb14d05c820a906d1bef6b4f793addd2879c4dc9420d823d9e.scope - libcontainer container ebd399f032d3f6eb14d05c820a906d1bef6b4f793addd2879c4dc9420d823d9e. Dec 16 13:04:32.383612 containerd[1553]: time="2025-12-16T13:04:32.383478407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:431d6bf4a6c51145869eb79188539fc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f404a6afb538286659a04c9dc6dcc16730eb40632569e0bbf4b103a58b0ef655\"" Dec 16 13:04:32.385591 containerd[1553]: time="2025-12-16T13:04:32.385554190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebd399f032d3f6eb14d05c820a906d1bef6b4f793addd2879c4dc9420d823d9e\"" Dec 16 13:04:32.393162 containerd[1553]: time="2025-12-16T13:04:32.393127299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd96cb2fea93e71ff576ee97ef55d78710ac11dddc74cad508ab16b97e7c4026\"" Dec 16 13:04:32.393936 containerd[1553]: time="2025-12-16T13:04:32.393895600Z" level=info msg="CreateContainer within sandbox \"f404a6afb538286659a04c9dc6dcc16730eb40632569e0bbf4b103a58b0ef655\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:04:32.398095 containerd[1553]: time="2025-12-16T13:04:32.398040992Z" level=info msg="CreateContainer within sandbox \"ebd399f032d3f6eb14d05c820a906d1bef6b4f793addd2879c4dc9420d823d9e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:04:32.407477 containerd[1553]: time="2025-12-16T13:04:32.407417854Z" level=info msg="Container d6cac6f401dd4fa486ff02fd1869a4d22397e6a31f26e6d2b158399f0fd7b6f9: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:32.410279 containerd[1553]: 
time="2025-12-16T13:04:32.410003562Z" level=info msg="CreateContainer within sandbox \"cd96cb2fea93e71ff576ee97ef55d78710ac11dddc74cad508ab16b97e7c4026\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:04:32.422676 containerd[1553]: time="2025-12-16T13:04:32.422628314Z" level=info msg="Container a72aefe69428ff7389aa993611ee9d1b1c593c891806f8b667d6146c0b8e6526: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:32.425507 containerd[1553]: time="2025-12-16T13:04:32.425445666Z" level=info msg="CreateContainer within sandbox \"f404a6afb538286659a04c9dc6dcc16730eb40632569e0bbf4b103a58b0ef655\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6cac6f401dd4fa486ff02fd1869a4d22397e6a31f26e6d2b158399f0fd7b6f9\"" Dec 16 13:04:32.426118 containerd[1553]: time="2025-12-16T13:04:32.426090535Z" level=info msg="StartContainer for \"d6cac6f401dd4fa486ff02fd1869a4d22397e6a31f26e6d2b158399f0fd7b6f9\"" Dec 16 13:04:32.427546 containerd[1553]: time="2025-12-16T13:04:32.427509506Z" level=info msg="connecting to shim d6cac6f401dd4fa486ff02fd1869a4d22397e6a31f26e6d2b158399f0fd7b6f9" address="unix:///run/containerd/s/5621045e825272c64546d78e67349b8d3c5eb26b61d8aec7965966e717f501a4" protocol=ttrpc version=3 Dec 16 13:04:32.430779 containerd[1553]: time="2025-12-16T13:04:32.430721428Z" level=info msg="CreateContainer within sandbox \"ebd399f032d3f6eb14d05c820a906d1bef6b4f793addd2879c4dc9420d823d9e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a72aefe69428ff7389aa993611ee9d1b1c593c891806f8b667d6146c0b8e6526\"" Dec 16 13:04:32.431425 containerd[1553]: time="2025-12-16T13:04:32.431372539Z" level=info msg="StartContainer for \"a72aefe69428ff7389aa993611ee9d1b1c593c891806f8b667d6146c0b8e6526\"" Dec 16 13:04:32.432959 containerd[1553]: time="2025-12-16T13:04:32.432921894Z" level=info msg="connecting to shim a72aefe69428ff7389aa993611ee9d1b1c593c891806f8b667d6146c0b8e6526" 
address="unix:///run/containerd/s/47a56384e887bc21710b077ff809ccdfdc1354d9c852abff51fdcb688e8517fd" protocol=ttrpc version=3 Dec 16 13:04:32.433570 containerd[1553]: time="2025-12-16T13:04:32.433546575Z" level=info msg="Container 082c0dfc563db52d02f16a1c5b867260a8bc2878f1a0ac074adf83bcdaa7d6bb: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:32.446362 containerd[1553]: time="2025-12-16T13:04:32.446312872Z" level=info msg="CreateContainer within sandbox \"cd96cb2fea93e71ff576ee97ef55d78710ac11dddc74cad508ab16b97e7c4026\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"082c0dfc563db52d02f16a1c5b867260a8bc2878f1a0ac074adf83bcdaa7d6bb\"" Dec 16 13:04:32.447169 containerd[1553]: time="2025-12-16T13:04:32.447150974Z" level=info msg="StartContainer for \"082c0dfc563db52d02f16a1c5b867260a8bc2878f1a0ac074adf83bcdaa7d6bb\"" Dec 16 13:04:32.451888 containerd[1553]: time="2025-12-16T13:04:32.451809809Z" level=info msg="connecting to shim 082c0dfc563db52d02f16a1c5b867260a8bc2878f1a0ac074adf83bcdaa7d6bb" address="unix:///run/containerd/s/3b68d3c189e6a0ceb7f8ffd6c9fee5a5bfb0967d24935b49095079b4d428612f" protocol=ttrpc version=3 Dec 16 13:04:32.457469 systemd[1]: Started cri-containerd-d6cac6f401dd4fa486ff02fd1869a4d22397e6a31f26e6d2b158399f0fd7b6f9.scope - libcontainer container d6cac6f401dd4fa486ff02fd1869a4d22397e6a31f26e6d2b158399f0fd7b6f9. Dec 16 13:04:32.464282 systemd[1]: Started cri-containerd-a72aefe69428ff7389aa993611ee9d1b1c593c891806f8b667d6146c0b8e6526.scope - libcontainer container a72aefe69428ff7389aa993611ee9d1b1c593c891806f8b667d6146c0b8e6526. Dec 16 13:04:32.488438 systemd[1]: Started cri-containerd-082c0dfc563db52d02f16a1c5b867260a8bc2878f1a0ac074adf83bcdaa7d6bb.scope - libcontainer container 082c0dfc563db52d02f16a1c5b867260a8bc2878f1a0ac074adf83bcdaa7d6bb. 
Dec 16 13:04:32.884346 containerd[1553]: time="2025-12-16T13:04:32.884213796Z" level=info msg="StartContainer for \"082c0dfc563db52d02f16a1c5b867260a8bc2878f1a0ac074adf83bcdaa7d6bb\" returns successfully" Dec 16 13:04:32.890892 containerd[1553]: time="2025-12-16T13:04:32.890821706Z" level=info msg="StartContainer for \"d6cac6f401dd4fa486ff02fd1869a4d22397e6a31f26e6d2b158399f0fd7b6f9\" returns successfully" Dec 16 13:04:32.898568 containerd[1553]: time="2025-12-16T13:04:32.898491347Z" level=info msg="StartContainer for \"a72aefe69428ff7389aa993611ee9d1b1c593c891806f8b667d6146c0b8e6526\" returns successfully" Dec 16 13:04:32.906669 kubelet[2340]: E1216 13:04:32.906643 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:04:33.910077 kubelet[2340]: E1216 13:04:33.910023 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:04:33.910729 kubelet[2340]: E1216 13:04:33.910398 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:04:33.910793 kubelet[2340]: E1216 13:04:33.910738 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:04:34.698363 kubelet[2340]: I1216 13:04:34.698316 2340 apiserver.go:52] "Watching apiserver" Dec 16 13:04:34.708797 kubelet[2340]: I1216 13:04:34.708751 2340 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:04:34.910746 kubelet[2340]: E1216 13:04:34.910700 2340 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 13:04:35.381667 kubelet[2340]: I1216 
13:04:35.381629 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:04:35.777802 kubelet[2340]: I1216 13:04:35.777354 2340 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 13:04:35.811921 kubelet[2340]: I1216 13:04:35.811863 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:35.911702 kubelet[2340]: I1216 13:04:35.911668 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:04:36.591854 kubelet[2340]: I1216 13:04:36.591777 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:04:36.990904 kubelet[2340]: E1216 13:04:36.990659 2340 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 13:04:36.990904 kubelet[2340]: I1216 13:04:36.990695 2340 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:38.865225 kubelet[2340]: I1216 13:04:38.865156 2340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.865138982 podStartE2EDuration="2.865138982s" podCreationTimestamp="2025-12-16 13:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:04:38.855608737 +0000 UTC m=+10.487928942" watchObservedRunningTime="2025-12-16 13:04:38.865138982 +0000 UTC m=+10.497459187" Dec 16 13:04:38.873856 kubelet[2340]: I1216 13:04:38.873492 2340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.873477852 podStartE2EDuration="2.873477852s" podCreationTimestamp="2025-12-16 13:04:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:04:38.86547884 +0000 UTC m=+10.497799045" watchObservedRunningTime="2025-12-16 13:04:38.873477852 +0000 UTC m=+10.505798057" Dec 16 13:04:41.132045 systemd[1]: Reload requested from client PID 2628 ('systemctl') (unit session-7.scope)... Dec 16 13:04:41.132066 systemd[1]: Reloading... Dec 16 13:04:41.229305 zram_generator::config[2674]: No configuration found. Dec 16 13:04:41.473643 systemd[1]: Reloading finished in 341 ms. Dec 16 13:04:41.502883 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:41.535142 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:04:41.535558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:41.535630 systemd[1]: kubelet.service: Consumed 1.000s CPU time, 132.2M memory peak. Dec 16 13:04:41.537934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:04:41.787612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:04:41.802759 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:04:41.842719 kubelet[2716]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:04:41.842719 kubelet[2716]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:04:41.842719 kubelet[2716]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:04:41.843128 kubelet[2716]: I1216 13:04:41.842720 2716 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:04:41.849159 kubelet[2716]: I1216 13:04:41.849115 2716 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 13:04:41.849159 kubelet[2716]: I1216 13:04:41.849150 2716 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:04:41.849468 kubelet[2716]: I1216 13:04:41.849442 2716 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:04:41.850950 kubelet[2716]: I1216 13:04:41.850924 2716 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:04:41.853735 kubelet[2716]: I1216 13:04:41.853711 2716 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:04:41.858729 kubelet[2716]: I1216 13:04:41.858690 2716 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:04:41.863721 kubelet[2716]: I1216 13:04:41.863684 2716 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:04:41.863954 kubelet[2716]: I1216 13:04:41.863913 2716 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:04:41.864100 kubelet[2716]: I1216 13:04:41.863945 2716 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:04:41.864194 kubelet[2716]: I1216 13:04:41.864105 2716 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:04:41.864194 
kubelet[2716]: I1216 13:04:41.864115 2716 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 13:04:41.864194 kubelet[2716]: I1216 13:04:41.864171 2716 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:04:41.864375 kubelet[2716]: I1216 13:04:41.864354 2716 kubelet.go:480] "Attempting to sync node with API server" Dec 16 13:04:41.864375 kubelet[2716]: I1216 13:04:41.864369 2716 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:04:41.864448 kubelet[2716]: I1216 13:04:41.864392 2716 kubelet.go:386] "Adding apiserver pod source" Dec 16 13:04:41.864448 kubelet[2716]: I1216 13:04:41.864410 2716 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:04:41.865524 kubelet[2716]: I1216 13:04:41.865482 2716 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:04:41.865977 kubelet[2716]: I1216 13:04:41.865939 2716 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:04:41.873971 kubelet[2716]: I1216 13:04:41.871205 2716 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:04:41.873971 kubelet[2716]: I1216 13:04:41.871276 2716 server.go:1289] "Started kubelet" Dec 16 13:04:41.873971 kubelet[2716]: I1216 13:04:41.871404 2716 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:04:41.873971 kubelet[2716]: I1216 13:04:41.871990 2716 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:04:41.873971 kubelet[2716]: I1216 13:04:41.872358 2716 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:04:41.873971 kubelet[2716]: I1216 13:04:41.872601 2716 server.go:317] "Adding debug handlers to kubelet server" Dec 16 13:04:41.874190 
kubelet[2716]: I1216 13:04:41.874077 2716 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:04:41.874400 kubelet[2716]: I1216 13:04:41.874373 2716 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:04:41.874472 kubelet[2716]: I1216 13:04:41.874451 2716 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:04:41.876410 kubelet[2716]: E1216 13:04:41.876388 2716 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:04:41.877447 kubelet[2716]: I1216 13:04:41.877429 2716 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:04:41.878443 kubelet[2716]: I1216 13:04:41.878383 2716 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:04:41.878684 kubelet[2716]: I1216 13:04:41.878660 2716 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:04:41.878813 kubelet[2716]: I1216 13:04:41.878798 2716 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:04:41.880475 kubelet[2716]: I1216 13:04:41.880450 2716 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:04:41.883342 kubelet[2716]: I1216 13:04:41.883309 2716 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 13:04:41.894156 kubelet[2716]: I1216 13:04:41.894112 2716 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 16 13:04:41.894156 kubelet[2716]: I1216 13:04:41.894137 2716 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 13:04:41.894343 kubelet[2716]: I1216 13:04:41.894180 2716 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:04:41.894343 kubelet[2716]: I1216 13:04:41.894188 2716 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 13:04:41.896329 kubelet[2716]: E1216 13:04:41.896295 2716 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:04:41.917339 kubelet[2716]: I1216 13:04:41.917297 2716 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:04:41.917339 kubelet[2716]: I1216 13:04:41.917316 2716 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:04:41.917339 kubelet[2716]: I1216 13:04:41.917347 2716 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:04:41.917504 kubelet[2716]: I1216 13:04:41.917494 2716 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:04:41.917544 kubelet[2716]: I1216 13:04:41.917503 2716 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:04:41.917544 kubelet[2716]: I1216 13:04:41.917519 2716 policy_none.go:49] "None policy: Start" Dec 16 13:04:41.917544 kubelet[2716]: I1216 13:04:41.917528 2716 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:04:41.917606 kubelet[2716]: I1216 13:04:41.917554 2716 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:04:41.917655 kubelet[2716]: I1216 13:04:41.917640 2716 state_mem.go:75] "Updated machine memory state" Dec 16 13:04:41.921823 kubelet[2716]: E1216 13:04:41.921785 2716 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:04:41.922033 kubelet[2716]: I1216 
13:04:41.922018 2716 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:04:41.922073 kubelet[2716]: I1216 13:04:41.922031 2716 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:04:41.922740 kubelet[2716]: I1216 13:04:41.922706 2716 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:04:41.924461 kubelet[2716]: E1216 13:04:41.924430 2716 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 13:04:41.997727 kubelet[2716]: I1216 13:04:41.997641 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:41.997727 kubelet[2716]: I1216 13:04:41.997734 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:41.997900 kubelet[2716]: I1216 13:04:41.997641 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 13:04:42.026066 kubelet[2716]: I1216 13:04:42.026036 2716 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 13:04:42.031103 kubelet[2716]: E1216 13:04:42.031060 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:42.031103 kubelet[2716]: E1216 13:04:42.031099 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 13:04:42.031949 kubelet[2716]: E1216 13:04:42.031922 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:42.037012 kubelet[2716]: I1216 13:04:42.036978 2716 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Dec 16 13:04:42.037086 kubelet[2716]: I1216 13:04:42.037047 2716 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 13:04:42.111500 sudo[2754]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 16 13:04:42.111902 sudo[2754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 16 13:04:42.179624 kubelet[2716]: I1216 13:04:42.179521 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/431d6bf4a6c51145869eb79188539fc3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"431d6bf4a6c51145869eb79188539fc3\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:42.179624 kubelet[2716]: I1216 13:04:42.179613 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:42.179778 kubelet[2716]: I1216 13:04:42.179635 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:42.179778 kubelet[2716]: I1216 13:04:42.179662 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:42.179778 kubelet[2716]: I1216 13:04:42.179678 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/431d6bf4a6c51145869eb79188539fc3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"431d6bf4a6c51145869eb79188539fc3\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:42.179778 kubelet[2716]: I1216 13:04:42.179696 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:42.179778 kubelet[2716]: I1216 13:04:42.179710 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 13:04:42.179900 kubelet[2716]: I1216 13:04:42.179725 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 13:04:42.179900 kubelet[2716]: I1216 13:04:42.179740 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/431d6bf4a6c51145869eb79188539fc3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"431d6bf4a6c51145869eb79188539fc3\") " pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:42.414994 sudo[2754]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:42.864990 kubelet[2716]: I1216 13:04:42.864907 2716 apiserver.go:52] "Watching apiserver" Dec 16 13:04:42.879502 kubelet[2716]: I1216 13:04:42.879463 2716 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:04:42.903346 kubelet[2716]: I1216 13:04:42.903294 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:43.017277 kubelet[2716]: E1216 13:04:43.017213 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 13:04:43.910677 sudo[1763]: pam_unix(sudo:session): session closed for user root Dec 16 13:04:43.912634 sshd[1762]: Connection closed by 10.0.0.1 port 42028 Dec 16 13:04:43.913042 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Dec 16 13:04:43.917730 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:42028.service: Deactivated successfully. Dec 16 13:04:43.920190 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:04:43.920477 systemd[1]: session-7.scope: Consumed 5.903s CPU time, 259M memory peak. Dec 16 13:04:43.921854 systemd-logind[1537]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:04:43.923111 systemd-logind[1537]: Removed session 7. 
Dec 16 13:04:45.730107 kubelet[2716]: I1216 13:04:45.730067 2716 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:04:45.730634 kubelet[2716]: I1216 13:04:45.730483 2716 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:04:45.730675 containerd[1553]: time="2025-12-16T13:04:45.730356045Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:04:46.677411 systemd[1]: Created slice kubepods-burstable-poda973bf97_745f_417a_a95e_a0d58f0e45a0.slice - libcontainer container kubepods-burstable-poda973bf97_745f_417a_a95e_a0d58f0e45a0.slice. Dec 16 13:04:46.692563 systemd[1]: Created slice kubepods-besteffort-pod733e01f3_076a_4793_907d_cb697703f76d.slice - libcontainer container kubepods-besteffort-pod733e01f3_076a_4793_907d_cb697703f76d.slice. Dec 16 13:04:46.709399 kubelet[2716]: I1216 13:04:46.709353 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-run\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.709520 kubelet[2716]: I1216 13:04:46.709409 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-hostproc\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.709520 kubelet[2716]: I1216 13:04:46.709428 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-xtables-lock\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " 
pod="kube-system/cilium-4d474" Dec 16 13:04:46.709520 kubelet[2716]: I1216 13:04:46.709449 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/259fd0c1-46ae-4cbf-8ee8-88cab877be25-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h9zsr\" (UID: \"259fd0c1-46ae-4cbf-8ee8-88cab877be25\") " pod="kube-system/cilium-operator-6c4d7847fc-h9zsr" Dec 16 13:04:46.709520 kubelet[2716]: I1216 13:04:46.709470 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/733e01f3-076a-4793-907d-cb697703f76d-kube-proxy\") pod \"kube-proxy-dvn25\" (UID: \"733e01f3-076a-4793-907d-cb697703f76d\") " pod="kube-system/kube-proxy-dvn25" Dec 16 13:04:46.709520 kubelet[2716]: I1216 13:04:46.709488 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/733e01f3-076a-4793-907d-cb697703f76d-xtables-lock\") pod \"kube-proxy-dvn25\" (UID: \"733e01f3-076a-4793-907d-cb697703f76d\") " pod="kube-system/kube-proxy-dvn25" Dec 16 13:04:46.709684 kubelet[2716]: I1216 13:04:46.709504 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-bpf-maps\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.709684 kubelet[2716]: I1216 13:04:46.709525 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-cgroup\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.709684 kubelet[2716]: I1216 
13:04:46.709562 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pk9w\" (UniqueName: \"kubernetes.io/projected/a973bf97-745f-417a-a95e-a0d58f0e45a0-kube-api-access-4pk9w\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.709684 kubelet[2716]: I1216 13:04:46.709584 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrqz\" (UniqueName: \"kubernetes.io/projected/259fd0c1-46ae-4cbf-8ee8-88cab877be25-kube-api-access-nsrqz\") pod \"cilium-operator-6c4d7847fc-h9zsr\" (UID: \"259fd0c1-46ae-4cbf-8ee8-88cab877be25\") " pod="kube-system/cilium-operator-6c4d7847fc-h9zsr" Dec 16 13:04:46.709684 kubelet[2716]: I1216 13:04:46.709603 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-lib-modules\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.709791 kubelet[2716]: I1216 13:04:46.709624 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-config-path\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.709791 kubelet[2716]: I1216 13:04:46.709642 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/733e01f3-076a-4793-907d-cb697703f76d-lib-modules\") pod \"kube-proxy-dvn25\" (UID: \"733e01f3-076a-4793-907d-cb697703f76d\") " pod="kube-system/kube-proxy-dvn25" Dec 16 13:04:46.709791 kubelet[2716]: I1216 13:04:46.709661 2716 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwqfx\" (UniqueName: \"kubernetes.io/projected/733e01f3-076a-4793-907d-cb697703f76d-kube-api-access-xwqfx\") pod \"kube-proxy-dvn25\" (UID: \"733e01f3-076a-4793-907d-cb697703f76d\") " pod="kube-system/kube-proxy-dvn25" Dec 16 13:04:46.709791 kubelet[2716]: I1216 13:04:46.709679 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-etc-cni-netd\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.709791 kubelet[2716]: I1216 13:04:46.709701 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a973bf97-745f-417a-a95e-a0d58f0e45a0-clustermesh-secrets\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.709783 systemd[1]: Created slice kubepods-besteffort-pod259fd0c1_46ae_4cbf_8ee8_88cab877be25.slice - libcontainer container kubepods-besteffort-pod259fd0c1_46ae_4cbf_8ee8_88cab877be25.slice. 
Dec 16 13:04:46.711625 kubelet[2716]: I1216 13:04:46.709732 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cni-path\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.711625 kubelet[2716]: I1216 13:04:46.709755 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-host-proc-sys-net\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.711625 kubelet[2716]: I1216 13:04:46.709773 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-host-proc-sys-kernel\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.711625 kubelet[2716]: I1216 13:04:46.709795 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a973bf97-745f-417a-a95e-a0d58f0e45a0-hubble-tls\") pod \"cilium-4d474\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") " pod="kube-system/cilium-4d474" Dec 16 13:04:46.987209 containerd[1553]: time="2025-12-16T13:04:46.987147907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4d474,Uid:a973bf97-745f-417a-a95e-a0d58f0e45a0,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:47.004880 containerd[1553]: time="2025-12-16T13:04:47.004842190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvn25,Uid:733e01f3-076a-4793-907d-cb697703f76d,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:47.012997 containerd[1553]: 
time="2025-12-16T13:04:47.012936791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h9zsr,Uid:259fd0c1-46ae-4cbf-8ee8-88cab877be25,Namespace:kube-system,Attempt:0,}" Dec 16 13:04:47.035297 containerd[1553]: time="2025-12-16T13:04:47.035001603Z" level=info msg="connecting to shim d290e4794a7bb564b5ddec87e954d838644fcf42a7c2cb0a6ca410f326287389" address="unix:///run/containerd/s/c6f70bf94c73c871f22be2bfa121cc45bdd191e9aea84a123962cfcd4aa7d518" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:47.041425 containerd[1553]: time="2025-12-16T13:04:47.041383942Z" level=info msg="connecting to shim 30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33" address="unix:///run/containerd/s/f7984fd73e6f2907de60371c9246429b284413ba45de398af753bdf9f4ee82a4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:47.047576 containerd[1553]: time="2025-12-16T13:04:47.047512110Z" level=info msg="connecting to shim a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5" address="unix:///run/containerd/s/9056fa38ac6fa0cffd194d7f6b533ee4ba2a74520e081fbcb0ada7c496fd225b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:04:47.091410 systemd[1]: Started cri-containerd-a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5.scope - libcontainer container a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5. Dec 16 13:04:47.096203 systemd[1]: Started cri-containerd-30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33.scope - libcontainer container 30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33. Dec 16 13:04:47.098186 systemd[1]: Started cri-containerd-d290e4794a7bb564b5ddec87e954d838644fcf42a7c2cb0a6ca410f326287389.scope - libcontainer container d290e4794a7bb564b5ddec87e954d838644fcf42a7c2cb0a6ca410f326287389. 
Dec 16 13:04:47.131482 containerd[1553]: time="2025-12-16T13:04:47.131390632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4d474,Uid:a973bf97-745f-417a-a95e-a0d58f0e45a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\"" Dec 16 13:04:47.135597 containerd[1553]: time="2025-12-16T13:04:47.133967343Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 16 13:04:47.153183 containerd[1553]: time="2025-12-16T13:04:47.152132620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvn25,Uid:733e01f3-076a-4793-907d-cb697703f76d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d290e4794a7bb564b5ddec87e954d838644fcf42a7c2cb0a6ca410f326287389\"" Dec 16 13:04:47.156945 containerd[1553]: time="2025-12-16T13:04:47.156910329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h9zsr,Uid:259fd0c1-46ae-4cbf-8ee8-88cab877be25,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5\"" Dec 16 13:04:47.159622 containerd[1553]: time="2025-12-16T13:04:47.159517467Z" level=info msg="CreateContainer within sandbox \"d290e4794a7bb564b5ddec87e954d838644fcf42a7c2cb0a6ca410f326287389\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:04:47.170879 containerd[1553]: time="2025-12-16T13:04:47.170836756Z" level=info msg="Container 7b710550bfa8c8bc647b54fbd9a2a7468ecc5a0060e5c055dd7042fb2123eedc: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:47.180386 containerd[1553]: time="2025-12-16T13:04:47.180357349Z" level=info msg="CreateContainer within sandbox \"d290e4794a7bb564b5ddec87e954d838644fcf42a7c2cb0a6ca410f326287389\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b710550bfa8c8bc647b54fbd9a2a7468ecc5a0060e5c055dd7042fb2123eedc\"" Dec 16 
13:04:47.180946 containerd[1553]: time="2025-12-16T13:04:47.180909365Z" level=info msg="StartContainer for \"7b710550bfa8c8bc647b54fbd9a2a7468ecc5a0060e5c055dd7042fb2123eedc\"" Dec 16 13:04:47.182130 containerd[1553]: time="2025-12-16T13:04:47.182108446Z" level=info msg="connecting to shim 7b710550bfa8c8bc647b54fbd9a2a7468ecc5a0060e5c055dd7042fb2123eedc" address="unix:///run/containerd/s/c6f70bf94c73c871f22be2bfa121cc45bdd191e9aea84a123962cfcd4aa7d518" protocol=ttrpc version=3 Dec 16 13:04:47.207566 systemd[1]: Started cri-containerd-7b710550bfa8c8bc647b54fbd9a2a7468ecc5a0060e5c055dd7042fb2123eedc.scope - libcontainer container 7b710550bfa8c8bc647b54fbd9a2a7468ecc5a0060e5c055dd7042fb2123eedc. Dec 16 13:04:47.310640 containerd[1553]: time="2025-12-16T13:04:47.310474302Z" level=info msg="StartContainer for \"7b710550bfa8c8bc647b54fbd9a2a7468ecc5a0060e5c055dd7042fb2123eedc\" returns successfully" Dec 16 13:04:47.598441 update_engine[1542]: I20251216 13:04:47.598304 1542 update_attempter.cc:509] Updating boot flags... Dec 16 13:04:47.924897 kubelet[2716]: I1216 13:04:47.924666 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dvn25" podStartSLOduration=1.924650503 podStartE2EDuration="1.924650503s" podCreationTimestamp="2025-12-16 13:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:04:47.924086785 +0000 UTC m=+6.116914965" watchObservedRunningTime="2025-12-16 13:04:47.924650503 +0000 UTC m=+6.117478683" Dec 16 13:04:55.001085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214070919.mount: Deactivated successfully. 
Dec 16 13:04:58.760788 containerd[1553]: time="2025-12-16T13:04:58.760714584Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:58.761771 containerd[1553]: time="2025-12-16T13:04:58.761692799Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Dec 16 13:04:58.762943 containerd[1553]: time="2025-12-16T13:04:58.762888542Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:04:58.764252 containerd[1553]: time="2025-12-16T13:04:58.764181738Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.628582545s" Dec 16 13:04:58.764331 containerd[1553]: time="2025-12-16T13:04:58.764256029Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 16 13:04:58.765514 containerd[1553]: time="2025-12-16T13:04:58.765478081Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 16 13:04:58.770937 containerd[1553]: time="2025-12-16T13:04:58.770865363Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 16 13:04:58.780600 containerd[1553]: time="2025-12-16T13:04:58.780535004Z" level=info msg="Container 97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:58.788578 containerd[1553]: time="2025-12-16T13:04:58.788526543Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\"" Dec 16 13:04:58.789348 containerd[1553]: time="2025-12-16T13:04:58.789167561Z" level=info msg="StartContainer for \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\"" Dec 16 13:04:58.790135 containerd[1553]: time="2025-12-16T13:04:58.790100319Z" level=info msg="connecting to shim 97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8" address="unix:///run/containerd/s/f7984fd73e6f2907de60371c9246429b284413ba45de398af753bdf9f4ee82a4" protocol=ttrpc version=3 Dec 16 13:04:58.819530 systemd[1]: Started cri-containerd-97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8.scope - libcontainer container 97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8. Dec 16 13:04:58.854964 containerd[1553]: time="2025-12-16T13:04:58.854910256Z" level=info msg="StartContainer for \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\" returns successfully" Dec 16 13:04:58.866143 systemd[1]: cri-containerd-97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8.scope: Deactivated successfully. 
Dec 16 13:04:58.867643 containerd[1553]: time="2025-12-16T13:04:58.867609035Z" level=info msg="received container exit event container_id:\"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\" id:\"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\" pid:3159 exited_at:{seconds:1765890298 nanos:867179064}" Dec 16 13:04:58.888847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8-rootfs.mount: Deactivated successfully. Dec 16 13:04:59.945790 containerd[1553]: time="2025-12-16T13:04:59.945739110Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 16 13:04:59.955042 containerd[1553]: time="2025-12-16T13:04:59.954999856Z" level=info msg="Container fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:04:59.961711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603164168.mount: Deactivated successfully. 
Dec 16 13:04:59.962465 containerd[1553]: time="2025-12-16T13:04:59.962432397Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\"" Dec 16 13:04:59.962891 containerd[1553]: time="2025-12-16T13:04:59.962873298Z" level=info msg="StartContainer for \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\"" Dec 16 13:04:59.963676 containerd[1553]: time="2025-12-16T13:04:59.963653918Z" level=info msg="connecting to shim fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2" address="unix:///run/containerd/s/f7984fd73e6f2907de60371c9246429b284413ba45de398af753bdf9f4ee82a4" protocol=ttrpc version=3 Dec 16 13:04:59.984380 systemd[1]: Started cri-containerd-fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2.scope - libcontainer container fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2. Dec 16 13:05:00.018747 containerd[1553]: time="2025-12-16T13:05:00.018708129Z" level=info msg="StartContainer for \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\" returns successfully" Dec 16 13:05:00.032704 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 13:05:00.033250 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:05:00.033449 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:05:00.035263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:05:00.037783 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 13:05:00.038310 systemd[1]: cri-containerd-fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2.scope: Deactivated successfully. 
Dec 16 13:05:00.039123 containerd[1553]: time="2025-12-16T13:05:00.039082870Z" level=info msg="received container exit event container_id:\"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\" id:\"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\" pid:3204 exited_at:{seconds:1765890300 nanos:38835364}" Dec 16 13:05:00.059507 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:05:00.952863 containerd[1553]: time="2025-12-16T13:05:00.952809492Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 16 13:05:00.957294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2-rootfs.mount: Deactivated successfully. Dec 16 13:05:00.965129 containerd[1553]: time="2025-12-16T13:05:00.965084539Z" level=info msg="Container c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:00.969133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555408534.mount: Deactivated successfully. 
Dec 16 13:05:00.973669 containerd[1553]: time="2025-12-16T13:05:00.973628570Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\"" Dec 16 13:05:00.974180 containerd[1553]: time="2025-12-16T13:05:00.974157666Z" level=info msg="StartContainer for \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\"" Dec 16 13:05:00.975391 containerd[1553]: time="2025-12-16T13:05:00.975346194Z" level=info msg="connecting to shim c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac" address="unix:///run/containerd/s/f7984fd73e6f2907de60371c9246429b284413ba45de398af753bdf9f4ee82a4" protocol=ttrpc version=3 Dec 16 13:05:00.997387 systemd[1]: Started cri-containerd-c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac.scope - libcontainer container c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac. Dec 16 13:05:01.091058 systemd[1]: cri-containerd-c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac.scope: Deactivated successfully. Dec 16 13:05:01.551566 containerd[1553]: time="2025-12-16T13:05:01.551487085Z" level=info msg="received container exit event container_id:\"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\" id:\"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\" pid:3264 exited_at:{seconds:1765890301 nanos:92306893}" Dec 16 13:05:01.564702 containerd[1553]: time="2025-12-16T13:05:01.564646631Z" level=info msg="StartContainer for \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\" returns successfully" Dec 16 13:05:01.580709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac-rootfs.mount: Deactivated successfully. 
Dec 16 13:05:01.687693 containerd[1553]: time="2025-12-16T13:05:01.687645916Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:01.688517 containerd[1553]: time="2025-12-16T13:05:01.688452515Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Dec 16 13:05:01.689805 containerd[1553]: time="2025-12-16T13:05:01.689782178Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:05:01.690953 containerd[1553]: time="2025-12-16T13:05:01.690922796Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.925407224s" Dec 16 13:05:01.690994 containerd[1553]: time="2025-12-16T13:05:01.690954936Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 16 13:05:01.695887 containerd[1553]: time="2025-12-16T13:05:01.695856655Z" level=info msg="CreateContainer within sandbox \"a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 16 13:05:01.702621 containerd[1553]: time="2025-12-16T13:05:01.702571016Z" level=info msg="Container 
3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:01.709616 containerd[1553]: time="2025-12-16T13:05:01.709554354Z" level=info msg="CreateContainer within sandbox \"a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\"" Dec 16 13:05:01.710037 containerd[1553]: time="2025-12-16T13:05:01.709990955Z" level=info msg="StartContainer for \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\"" Dec 16 13:05:01.710899 containerd[1553]: time="2025-12-16T13:05:01.710866995Z" level=info msg="connecting to shim 3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b" address="unix:///run/containerd/s/9056fa38ac6fa0cffd194d7f6b533ee4ba2a74520e081fbcb0ada7c496fd225b" protocol=ttrpc version=3 Dec 16 13:05:01.732458 systemd[1]: Started cri-containerd-3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b.scope - libcontainer container 3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b. 
Dec 16 13:05:02.007782 containerd[1553]: time="2025-12-16T13:05:02.007723072Z" level=info msg="StartContainer for \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\" returns successfully" Dec 16 13:05:02.202051 containerd[1553]: time="2025-12-16T13:05:02.201964643Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 16 13:05:02.225475 containerd[1553]: time="2025-12-16T13:05:02.225420412Z" level=info msg="Container abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:05:02.243326 containerd[1553]: time="2025-12-16T13:05:02.243274488Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\"" Dec 16 13:05:02.245802 containerd[1553]: time="2025-12-16T13:05:02.245760247Z" level=info msg="StartContainer for \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\"" Dec 16 13:05:02.247678 containerd[1553]: time="2025-12-16T13:05:02.247340010Z" level=info msg="connecting to shim abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329" address="unix:///run/containerd/s/f7984fd73e6f2907de60371c9246429b284413ba45de398af753bdf9f4ee82a4" protocol=ttrpc version=3 Dec 16 13:05:02.276491 systemd[1]: Started cri-containerd-abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329.scope - libcontainer container abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329. Dec 16 13:05:02.354968 systemd[1]: cri-containerd-abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329.scope: Deactivated successfully. 
Dec 16 13:05:02.357256 containerd[1553]: time="2025-12-16T13:05:02.356761018Z" level=info msg="received container exit event container_id:\"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\" id:\"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\" pid:3342 exited_at:{seconds:1765890302 nanos:355786253}"
Dec 16 13:05:02.374331 containerd[1553]: time="2025-12-16T13:05:02.374217846Z" level=info msg="StartContainer for \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\" returns successfully"
Dec 16 13:05:02.957421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329-rootfs.mount: Deactivated successfully.
Dec 16 13:05:03.026166 containerd[1553]: time="2025-12-16T13:05:03.026120597Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:05:03.047279 containerd[1553]: time="2025-12-16T13:05:03.047148156Z" level=info msg="Container 030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:05:03.123541 kubelet[2716]: I1216 13:05:03.123473 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h9zsr" podStartSLOduration=2.589469463 podStartE2EDuration="17.12345509s" podCreationTimestamp="2025-12-16 13:04:46 +0000 UTC" firstStartedPulling="2025-12-16 13:04:47.157673054 +0000 UTC m=+5.350501244" lastFinishedPulling="2025-12-16 13:05:01.691658691 +0000 UTC m=+19.884486871" observedRunningTime="2025-12-16 13:05:03.033454448 +0000 UTC m=+21.226282648" watchObservedRunningTime="2025-12-16 13:05:03.12345509 +0000 UTC m=+21.316283280"
Dec 16 13:05:03.126025 containerd[1553]: time="2025-12-16T13:05:03.125988017Z" level=info msg="CreateContainer within sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\""
Dec 16 13:05:03.126562 containerd[1553]: time="2025-12-16T13:05:03.126505941Z" level=info msg="StartContainer for \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\""
Dec 16 13:05:03.127891 containerd[1553]: time="2025-12-16T13:05:03.127859017Z" level=info msg="connecting to shim 030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792" address="unix:///run/containerd/s/f7984fd73e6f2907de60371c9246429b284413ba45de398af753bdf9f4ee82a4" protocol=ttrpc version=3
Dec 16 13:05:03.153465 systemd[1]: Started cri-containerd-030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792.scope - libcontainer container 030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792.
Dec 16 13:05:03.251875 containerd[1553]: time="2025-12-16T13:05:03.251829574Z" level=info msg="StartContainer for \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\" returns successfully"
Dec 16 13:05:03.489741 kubelet[2716]: I1216 13:05:03.489682 2716 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 13:05:03.550640 systemd[1]: Created slice kubepods-burstable-pod4ccf0918_9935_45a3_af79_a06f5b5f5bc0.slice - libcontainer container kubepods-burstable-pod4ccf0918_9935_45a3_af79_a06f5b5f5bc0.slice.
Dec 16 13:05:03.560023 systemd[1]: Created slice kubepods-burstable-pod9cf146bd_4551_426a_b1d8_253e16ec46fe.slice - libcontainer container kubepods-burstable-pod9cf146bd_4551_426a_b1d8_253e16ec46fe.slice.
Dec 16 13:05:03.716416 kubelet[2716]: I1216 13:05:03.716355 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ccf0918-9935-45a3-af79-a06f5b5f5bc0-config-volume\") pod \"coredns-674b8bbfcf-wwz8d\" (UID: \"4ccf0918-9935-45a3-af79-a06f5b5f5bc0\") " pod="kube-system/coredns-674b8bbfcf-wwz8d"
Dec 16 13:05:03.716416 kubelet[2716]: I1216 13:05:03.716407 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cf146bd-4551-426a-b1d8-253e16ec46fe-config-volume\") pod \"coredns-674b8bbfcf-sh5j8\" (UID: \"9cf146bd-4551-426a-b1d8-253e16ec46fe\") " pod="kube-system/coredns-674b8bbfcf-sh5j8"
Dec 16 13:05:03.716635 kubelet[2716]: I1216 13:05:03.716431 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlq6m\" (UniqueName: \"kubernetes.io/projected/4ccf0918-9935-45a3-af79-a06f5b5f5bc0-kube-api-access-nlq6m\") pod \"coredns-674b8bbfcf-wwz8d\" (UID: \"4ccf0918-9935-45a3-af79-a06f5b5f5bc0\") " pod="kube-system/coredns-674b8bbfcf-wwz8d"
Dec 16 13:05:03.716635 kubelet[2716]: I1216 13:05:03.716467 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgqf\" (UniqueName: \"kubernetes.io/projected/9cf146bd-4551-426a-b1d8-253e16ec46fe-kube-api-access-kmgqf\") pod \"coredns-674b8bbfcf-sh5j8\" (UID: \"9cf146bd-4551-426a-b1d8-253e16ec46fe\") " pod="kube-system/coredns-674b8bbfcf-sh5j8"
Dec 16 13:05:03.856082 kubelet[2716]: E1216 13:05:03.855783 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:03.856777 containerd[1553]: time="2025-12-16T13:05:03.856746714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwz8d,Uid:4ccf0918-9935-45a3-af79-a06f5b5f5bc0,Namespace:kube-system,Attempt:0,}"
Dec 16 13:05:03.863395 kubelet[2716]: E1216 13:05:03.863357 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:03.863952 containerd[1553]: time="2025-12-16T13:05:03.863917558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sh5j8,Uid:9cf146bd-4551-426a-b1d8-253e16ec46fe,Namespace:kube-system,Attempt:0,}"
Dec 16 13:05:04.044365 kubelet[2716]: E1216 13:05:04.044313 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:04.044505 kubelet[2716]: E1216 13:05:04.044417 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:04.332551 kubelet[2716]: I1216 13:05:04.332475 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4d474" podStartSLOduration=6.700935202 podStartE2EDuration="18.33245545s" podCreationTimestamp="2025-12-16 13:04:46 +0000 UTC" firstStartedPulling="2025-12-16 13:04:47.133730755 +0000 UTC m=+5.326558935" lastFinishedPulling="2025-12-16 13:04:58.765251003 +0000 UTC m=+16.958079183" observedRunningTime="2025-12-16 13:05:04.331661216 +0000 UTC m=+22.524489396" watchObservedRunningTime="2025-12-16 13:05:04.33245545 +0000 UTC m=+22.525283630"
Dec 16 13:05:05.047984 kubelet[2716]: E1216 13:05:05.047956 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:06.050061 kubelet[2716]: E1216 13:05:06.050006 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:06.347948 systemd-networkd[1457]: cilium_host: Link UP
Dec 16 13:05:06.348144 systemd-networkd[1457]: cilium_net: Link UP
Dec 16 13:05:06.348699 systemd-networkd[1457]: cilium_net: Gained carrier
Dec 16 13:05:06.349171 systemd-networkd[1457]: cilium_host: Gained carrier
Dec 16 13:05:06.458916 systemd-networkd[1457]: cilium_vxlan: Link UP
Dec 16 13:05:06.459804 systemd-networkd[1457]: cilium_vxlan: Gained carrier
Dec 16 13:05:06.539412 systemd-networkd[1457]: cilium_net: Gained IPv6LL
Dec 16 13:05:06.708270 kernel: NET: Registered PF_ALG protocol family
Dec 16 13:05:06.803473 systemd-networkd[1457]: cilium_host: Gained IPv6LL
Dec 16 13:05:07.355193 systemd-networkd[1457]: lxc_health: Link UP
Dec 16 13:05:07.357911 systemd-networkd[1457]: lxc_health: Gained carrier
Dec 16 13:05:07.715432 systemd-networkd[1457]: cilium_vxlan: Gained IPv6LL
Dec 16 13:05:07.856805 systemd-networkd[1457]: lxcded33a51619d: Link UP
Dec 16 13:05:07.889295 kernel: eth0: renamed from tmp6e46e
Dec 16 13:05:07.889497 systemd-networkd[1457]: lxcded33a51619d: Gained carrier
Dec 16 13:05:07.925532 systemd-networkd[1457]: lxc97a36b54fa15: Link UP
Dec 16 13:05:07.940465 kernel: eth0: renamed from tmp1eaff
Dec 16 13:05:07.943696 systemd-networkd[1457]: lxc97a36b54fa15: Gained carrier
Dec 16 13:05:08.988269 kubelet[2716]: E1216 13:05:08.988148 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:09.251453 systemd-networkd[1457]: lxc_health: Gained IPv6LL
Dec 16 13:05:09.379448 systemd-networkd[1457]: lxc97a36b54fa15: Gained IPv6LL
Dec 16 13:05:09.699401 systemd-networkd[1457]: lxcded33a51619d: Gained IPv6LL
Dec 16 13:05:11.356798 containerd[1553]: time="2025-12-16T13:05:11.356316376Z" level=info msg="connecting to shim 6e46eb636853a04211ad9ef9fe1d8ad1d8fc9b1fbce38d826c80f59ebb6200f3" address="unix:///run/containerd/s/ea5bec126fa3499591e435a278991cb650d4006a31ccb505a3227438fe19f605" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:05:11.364388 containerd[1553]: time="2025-12-16T13:05:11.364338921Z" level=info msg="connecting to shim 1eafff96553de3bc2d74e1f83bfde5384c8b345dee0dfa7eda2053b150efd181" address="unix:///run/containerd/s/030ae295959e474709ddfa9f8aec367dd865300e596559eb75ceee5c4e88b967" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:05:11.389400 systemd[1]: Started cri-containerd-6e46eb636853a04211ad9ef9fe1d8ad1d8fc9b1fbce38d826c80f59ebb6200f3.scope - libcontainer container 6e46eb636853a04211ad9ef9fe1d8ad1d8fc9b1fbce38d826c80f59ebb6200f3.
Dec 16 13:05:11.392754 systemd[1]: Started cri-containerd-1eafff96553de3bc2d74e1f83bfde5384c8b345dee0dfa7eda2053b150efd181.scope - libcontainer container 1eafff96553de3bc2d74e1f83bfde5384c8b345dee0dfa7eda2053b150efd181.
Dec 16 13:05:11.408523 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 13:05:11.415952 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 16 13:05:11.445097 containerd[1553]: time="2025-12-16T13:05:11.445055051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sh5j8,Uid:9cf146bd-4551-426a-b1d8-253e16ec46fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eafff96553de3bc2d74e1f83bfde5384c8b345dee0dfa7eda2053b150efd181\""
Dec 16 13:05:11.446058 kubelet[2716]: E1216 13:05:11.446030 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:11.455975 containerd[1553]: time="2025-12-16T13:05:11.455079868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwz8d,Uid:4ccf0918-9935-45a3-af79-a06f5b5f5bc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e46eb636853a04211ad9ef9fe1d8ad1d8fc9b1fbce38d826c80f59ebb6200f3\""
Dec 16 13:05:11.456112 kubelet[2716]: E1216 13:05:11.455657 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:11.460544 containerd[1553]: time="2025-12-16T13:05:11.460512556Z" level=info msg="CreateContainer within sandbox \"6e46eb636853a04211ad9ef9fe1d8ad1d8fc9b1fbce38d826c80f59ebb6200f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 13:05:11.465724 containerd[1553]: time="2025-12-16T13:05:11.465683252Z" level=info msg="CreateContainer within sandbox \"1eafff96553de3bc2d74e1f83bfde5384c8b345dee0dfa7eda2053b150efd181\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 13:05:11.478523 containerd[1553]: time="2025-12-16T13:05:11.478475459Z" level=info msg="Container cfed29e1b549b5ba0988b1698e69d511d2965c1b77259d5d8d14a6af532f9c14: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:05:11.486202 containerd[1553]: time="2025-12-16T13:05:11.486150181Z" level=info msg="CreateContainer within sandbox \"6e46eb636853a04211ad9ef9fe1d8ad1d8fc9b1fbce38d826c80f59ebb6200f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfed29e1b549b5ba0988b1698e69d511d2965c1b77259d5d8d14a6af532f9c14\""
Dec 16 13:05:11.486705 containerd[1553]: time="2025-12-16T13:05:11.486673234Z" level=info msg="StartContainer for \"cfed29e1b549b5ba0988b1698e69d511d2965c1b77259d5d8d14a6af532f9c14\""
Dec 16 13:05:11.488669 containerd[1553]: time="2025-12-16T13:05:11.488642464Z" level=info msg="connecting to shim cfed29e1b549b5ba0988b1698e69d511d2965c1b77259d5d8d14a6af532f9c14" address="unix:///run/containerd/s/ea5bec126fa3499591e435a278991cb650d4006a31ccb505a3227438fe19f605" protocol=ttrpc version=3
Dec 16 13:05:11.489915 containerd[1553]: time="2025-12-16T13:05:11.489876603Z" level=info msg="Container 8a3b5e95d5f61b7226a60b330a72635798174d351a92d76391b2513f0cc81cf7: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:05:11.507965 containerd[1553]: time="2025-12-16T13:05:11.507359274Z" level=info msg="CreateContainer within sandbox \"1eafff96553de3bc2d74e1f83bfde5384c8b345dee0dfa7eda2053b150efd181\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a3b5e95d5f61b7226a60b330a72635798174d351a92d76391b2513f0cc81cf7\""
Dec 16 13:05:11.508116 containerd[1553]: time="2025-12-16T13:05:11.508023922Z" level=info msg="StartContainer for \"8a3b5e95d5f61b7226a60b330a72635798174d351a92d76391b2513f0cc81cf7\""
Dec 16 13:05:11.509053 containerd[1553]: time="2025-12-16T13:05:11.509013813Z" level=info msg="connecting to shim 8a3b5e95d5f61b7226a60b330a72635798174d351a92d76391b2513f0cc81cf7" address="unix:///run/containerd/s/030ae295959e474709ddfa9f8aec367dd865300e596559eb75ceee5c4e88b967" protocol=ttrpc version=3
Dec 16 13:05:11.511627 systemd[1]: Started cri-containerd-cfed29e1b549b5ba0988b1698e69d511d2965c1b77259d5d8d14a6af532f9c14.scope - libcontainer container cfed29e1b549b5ba0988b1698e69d511d2965c1b77259d5d8d14a6af532f9c14.
Dec 16 13:05:11.538393 systemd[1]: Started cri-containerd-8a3b5e95d5f61b7226a60b330a72635798174d351a92d76391b2513f0cc81cf7.scope - libcontainer container 8a3b5e95d5f61b7226a60b330a72635798174d351a92d76391b2513f0cc81cf7.
Dec 16 13:05:11.573380 containerd[1553]: time="2025-12-16T13:05:11.570629112Z" level=info msg="StartContainer for \"cfed29e1b549b5ba0988b1698e69d511d2965c1b77259d5d8d14a6af532f9c14\" returns successfully"
Dec 16 13:05:11.581381 containerd[1553]: time="2025-12-16T13:05:11.581344536Z" level=info msg="StartContainer for \"8a3b5e95d5f61b7226a60b330a72635798174d351a92d76391b2513f0cc81cf7\" returns successfully"
Dec 16 13:05:12.062610 kubelet[2716]: E1216 13:05:12.062508 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:12.064708 kubelet[2716]: E1216 13:05:12.064491 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:12.362842 kubelet[2716]: I1216 13:05:12.362558 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sh5j8" podStartSLOduration=26.360040677 podStartE2EDuration="26.360040677s" podCreationTimestamp="2025-12-16 13:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:12.246875773 +0000 UTC m=+30.439703963" watchObservedRunningTime="2025-12-16 13:05:12.360040677 +0000 UTC m=+30.552868857"
Dec 16 13:05:12.377394 kubelet[2716]: I1216 13:05:12.377150 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wwz8d" podStartSLOduration=26.377126298 podStartE2EDuration="26.377126298s" podCreationTimestamp="2025-12-16 13:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:05:12.362723908 +0000 UTC m=+30.555552088" watchObservedRunningTime="2025-12-16 13:05:12.377126298 +0000 UTC m=+30.569954478"
Dec 16 13:05:13.066427 kubelet[2716]: E1216 13:05:13.066396 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:13.066427 kubelet[2716]: E1216 13:05:13.066443 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:14.068858 kubelet[2716]: E1216 13:05:14.068818 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:14.069369 kubelet[2716]: E1216 13:05:14.069023 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:14.885596 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:51224.service - OpenSSH per-connection server daemon (10.0.0.1:51224).
Dec 16 13:05:14.951060 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 51224 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:14.952799 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:14.958026 systemd-logind[1537]: New session 8 of user core.
Dec 16 13:05:14.966385 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 13:05:15.159623 sshd[4057]: Connection closed by 10.0.0.1 port 51224
Dec 16 13:05:15.159899 sshd-session[4054]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:15.165035 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:51224.service: Deactivated successfully.
Dec 16 13:05:15.166953 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 13:05:15.167962 systemd-logind[1537]: Session 8 logged out. Waiting for processes to exit.
Dec 16 13:05:15.169323 systemd-logind[1537]: Removed session 8.
Dec 16 13:05:15.283017 kubelet[2716]: I1216 13:05:15.282957 2716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 13:05:15.283531 kubelet[2716]: E1216 13:05:15.283506 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:16.073442 kubelet[2716]: E1216 13:05:16.073390 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:20.176033 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:42074.service - OpenSSH per-connection server daemon (10.0.0.1:42074).
Dec 16 13:05:20.233296 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 42074 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:20.235029 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:20.239428 systemd-logind[1537]: New session 9 of user core.
Dec 16 13:05:20.250398 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 13:05:20.373245 sshd[4077]: Connection closed by 10.0.0.1 port 42074
Dec 16 13:05:20.373614 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:20.377709 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:42074.service: Deactivated successfully.
Dec 16 13:05:20.379735 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 13:05:20.380720 systemd-logind[1537]: Session 9 logged out. Waiting for processes to exit.
Dec 16 13:05:20.382131 systemd-logind[1537]: Removed session 9.
Dec 16 13:05:25.386039 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:42090.service - OpenSSH per-connection server daemon (10.0.0.1:42090).
Dec 16 13:05:25.450408 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 42090 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:25.452149 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:25.457811 systemd-logind[1537]: New session 10 of user core.
Dec 16 13:05:25.467427 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 13:05:25.588009 sshd[4096]: Connection closed by 10.0.0.1 port 42090
Dec 16 13:05:25.588437 sshd-session[4093]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:25.593188 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:42090.service: Deactivated successfully.
Dec 16 13:05:25.595200 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 13:05:25.596012 systemd-logind[1537]: Session 10 logged out. Waiting for processes to exit.
Dec 16 13:05:25.597745 systemd-logind[1537]: Removed session 10.
Dec 16 13:05:30.605067 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:49750.service - OpenSSH per-connection server daemon (10.0.0.1:49750).
Dec 16 13:05:30.667732 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 49750 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:30.669316 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:30.673553 systemd-logind[1537]: New session 11 of user core.
Dec 16 13:05:30.684394 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 13:05:30.795404 sshd[4113]: Connection closed by 10.0.0.1 port 49750
Dec 16 13:05:30.795728 sshd-session[4110]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:30.799541 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:49750.service: Deactivated successfully.
Dec 16 13:05:30.801355 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 13:05:30.802215 systemd-logind[1537]: Session 11 logged out. Waiting for processes to exit.
Dec 16 13:05:30.803492 systemd-logind[1537]: Removed session 11.
Dec 16 13:05:35.810504 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:49764.service - OpenSSH per-connection server daemon (10.0.0.1:49764).
Dec 16 13:05:35.871208 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 49764 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:35.872496 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:35.876724 systemd-logind[1537]: New session 12 of user core.
Dec 16 13:05:35.886369 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 13:05:36.002528 sshd[4131]: Connection closed by 10.0.0.1 port 49764
Dec 16 13:05:36.002932 sshd-session[4128]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:36.016054 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:49764.service: Deactivated successfully.
Dec 16 13:05:36.018020 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 13:05:36.018744 systemd-logind[1537]: Session 12 logged out. Waiting for processes to exit.
Dec 16 13:05:36.021455 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:49770.service - OpenSSH per-connection server daemon (10.0.0.1:49770).
Dec 16 13:05:36.022180 systemd-logind[1537]: Removed session 12.
Dec 16 13:05:36.080158 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 49770 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:36.081819 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:36.086449 systemd-logind[1537]: New session 13 of user core.
Dec 16 13:05:36.100414 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 13:05:36.274260 sshd[4148]: Connection closed by 10.0.0.1 port 49770
Dec 16 13:05:36.276224 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:36.288538 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:49770.service: Deactivated successfully.
Dec 16 13:05:36.291605 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 13:05:36.293178 systemd-logind[1537]: Session 13 logged out. Waiting for processes to exit.
Dec 16 13:05:36.297217 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:49776.service - OpenSSH per-connection server daemon (10.0.0.1:49776).
Dec 16 13:05:36.298251 systemd-logind[1537]: Removed session 13.
Dec 16 13:05:36.357988 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 49776 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:36.359861 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:36.364270 systemd-logind[1537]: New session 14 of user core.
Dec 16 13:05:36.374373 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 13:05:36.487674 sshd[4163]: Connection closed by 10.0.0.1 port 49776
Dec 16 13:05:36.488004 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:36.492831 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:49776.service: Deactivated successfully.
Dec 16 13:05:36.495206 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 13:05:36.496254 systemd-logind[1537]: Session 14 logged out. Waiting for processes to exit.
Dec 16 13:05:36.497896 systemd-logind[1537]: Removed session 14.
Dec 16 13:05:41.510617 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:42242.service - OpenSSH per-connection server daemon (10.0.0.1:42242).
Dec 16 13:05:41.579269 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 42242 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:41.581476 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:41.586466 systemd-logind[1537]: New session 15 of user core.
Dec 16 13:05:41.592382 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 13:05:41.706865 sshd[4179]: Connection closed by 10.0.0.1 port 42242
Dec 16 13:05:41.707378 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:41.711338 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:42242.service: Deactivated successfully.
Dec 16 13:05:41.713486 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 13:05:41.714379 systemd-logind[1537]: Session 15 logged out. Waiting for processes to exit.
Dec 16 13:05:41.715567 systemd-logind[1537]: Removed session 15.
Dec 16 13:05:46.730844 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:42254.service - OpenSSH per-connection server daemon (10.0.0.1:42254).
Dec 16 13:05:46.793503 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 42254 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:46.794897 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:46.799704 systemd-logind[1537]: New session 16 of user core.
Dec 16 13:05:46.815435 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 13:05:46.929267 sshd[4197]: Connection closed by 10.0.0.1 port 42254
Dec 16 13:05:46.929772 sshd-session[4194]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:46.940109 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:42254.service: Deactivated successfully.
Dec 16 13:05:46.942055 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 13:05:46.943228 systemd-logind[1537]: Session 16 logged out. Waiting for processes to exit.
Dec 16 13:05:46.946881 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:42268.service - OpenSSH per-connection server daemon (10.0.0.1:42268).
Dec 16 13:05:46.947580 systemd-logind[1537]: Removed session 16.
Dec 16 13:05:47.004578 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 42268 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:47.005939 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:47.010672 systemd-logind[1537]: New session 17 of user core.
Dec 16 13:05:47.021373 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 13:05:47.721879 sshd[4214]: Connection closed by 10.0.0.1 port 42268
Dec 16 13:05:47.722363 sshd-session[4211]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:47.732215 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:42268.service: Deactivated successfully.
Dec 16 13:05:47.734703 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 13:05:47.735543 systemd-logind[1537]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:05:47.739676 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:42270.service - OpenSSH per-connection server daemon (10.0.0.1:42270).
Dec 16 13:05:47.740916 systemd-logind[1537]: Removed session 17.
Dec 16 13:05:47.804029 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 42270 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:47.806050 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:47.811452 systemd-logind[1537]: New session 18 of user core.
Dec 16 13:05:47.820384 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:05:48.352659 sshd[4231]: Connection closed by 10.0.0.1 port 42270
Dec 16 13:05:48.354406 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:48.372401 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:42270.service: Deactivated successfully.
Dec 16 13:05:48.374341 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:05:48.375433 systemd-logind[1537]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:05:48.379781 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:42282.service - OpenSSH per-connection server daemon (10.0.0.1:42282).
Dec 16 13:05:48.381160 systemd-logind[1537]: Removed session 18.
Dec 16 13:05:48.436206 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 42282 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:48.437857 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:48.442847 systemd-logind[1537]: New session 19 of user core.
Dec 16 13:05:48.456385 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:05:48.707377 sshd[4252]: Connection closed by 10.0.0.1 port 42282
Dec 16 13:05:48.708011 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:48.723457 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:42282.service: Deactivated successfully.
Dec 16 13:05:48.726165 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:05:48.729528 systemd-logind[1537]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:05:48.731807 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:42288.service - OpenSSH per-connection server daemon (10.0.0.1:42288).
Dec 16 13:05:48.733835 systemd-logind[1537]: Removed session 19.
Dec 16 13:05:48.787803 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 42288 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:48.789430 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:48.794562 systemd-logind[1537]: New session 20 of user core.
Dec 16 13:05:48.809405 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:05:48.931810 sshd[4267]: Connection closed by 10.0.0.1 port 42288
Dec 16 13:05:48.932214 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:48.937085 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:42288.service: Deactivated successfully.
Dec 16 13:05:48.939110 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:05:48.940082 systemd-logind[1537]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:05:48.941781 systemd-logind[1537]: Removed session 20.
Dec 16 13:05:53.898552 kubelet[2716]: E1216 13:05:53.898514 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:53.946819 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:33216.service - OpenSSH per-connection server daemon (10.0.0.1:33216).
Dec 16 13:05:54.009332 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 33216 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:54.011324 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:54.017414 systemd-logind[1537]: New session 21 of user core.
Dec 16 13:05:54.027531 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 13:05:54.147421 sshd[4284]: Connection closed by 10.0.0.1 port 33216
Dec 16 13:05:54.147730 sshd-session[4281]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:54.152493 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:33216.service: Deactivated successfully.
Dec 16 13:05:54.155166 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 13:05:54.156005 systemd-logind[1537]: Session 21 logged out. Waiting for processes to exit.
Dec 16 13:05:54.157151 systemd-logind[1537]: Removed session 21.
Dec 16 13:05:56.895990 kubelet[2716]: E1216 13:05:56.895904 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:58.895036 kubelet[2716]: E1216 13:05:58.895000 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:05:59.159982 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:33222.service - OpenSSH per-connection server daemon (10.0.0.1:33222).
Dec 16 13:05:59.220323 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 33222 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:05:59.221814 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:05:59.225937 systemd-logind[1537]: New session 22 of user core.
Dec 16 13:05:59.235365 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 13:05:59.346842 sshd[4303]: Connection closed by 10.0.0.1 port 33222
Dec 16 13:05:59.347289 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Dec 16 13:05:59.351516 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:33222.service: Deactivated successfully.
Dec 16 13:05:59.353873 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 13:05:59.354752 systemd-logind[1537]: Session 22 logged out. Waiting for processes to exit.
Dec 16 13:05:59.355961 systemd-logind[1537]: Removed session 22.
Dec 16 13:05:59.895888 kubelet[2716]: E1216 13:05:59.895837 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:04.360190 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:45224.service - OpenSSH per-connection server daemon (10.0.0.1:45224).
Dec 16 13:06:04.424992 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 45224 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:06:04.426840 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:06:04.431680 systemd-logind[1537]: New session 23 of user core.
Dec 16 13:06:04.441481 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 13:06:04.560527 sshd[4320]: Connection closed by 10.0.0.1 port 45224
Dec 16 13:06:04.561055 sshd-session[4317]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:04.571463 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:45224.service: Deactivated successfully.
Dec 16 13:06:04.573485 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 13:06:04.574257 systemd-logind[1537]: Session 23 logged out. Waiting for processes to exit.
Dec 16 13:06:04.576861 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:45232.service - OpenSSH per-connection server daemon (10.0.0.1:45232).
Dec 16 13:06:04.577731 systemd-logind[1537]: Removed session 23.
Dec 16 13:06:04.643477 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 45232 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:06:04.645294 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:06:04.649817 systemd-logind[1537]: New session 24 of user core.
Dec 16 13:06:04.661422 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 13:06:06.838010 containerd[1553]: time="2025-12-16T13:06:06.837903845Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 13:06:06.844105 containerd[1553]: time="2025-12-16T13:06:06.844052989Z" level=info msg="StopContainer for \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\" with timeout 2 (s)"
Dec 16 13:06:06.844380 containerd[1553]: time="2025-12-16T13:06:06.844343361Z" level=info msg="Stop container \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\" with signal terminated"
Dec 16 13:06:06.853548 systemd-networkd[1457]: lxc_health: Link DOWN
Dec 16 13:06:06.853560 systemd-networkd[1457]: lxc_health: Lost carrier
Dec 16 13:06:06.870637 systemd[1]: cri-containerd-030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792.scope: Deactivated successfully.
Dec 16 13:06:06.871413 systemd[1]: cri-containerd-030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792.scope: Consumed 6.625s CPU time, 122.6M memory peak, 224K read from disk, 13.3M written to disk.
Dec 16 13:06:06.873042 containerd[1553]: time="2025-12-16T13:06:06.872983506Z" level=info msg="received container exit event container_id:\"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\" id:\"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\" pid:3378 exited_at:{seconds:1765890366 nanos:872670672}"
Dec 16 13:06:06.899713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792-rootfs.mount: Deactivated successfully.
Dec 16 13:06:06.947585 kubelet[2716]: E1216 13:06:06.947520 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 13:06:07.168769 containerd[1553]: time="2025-12-16T13:06:07.168523388Z" level=info msg="StopContainer for \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\" with timeout 30 (s)"
Dec 16 13:06:07.169287 containerd[1553]: time="2025-12-16T13:06:07.169198480Z" level=info msg="Stop container \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\" with signal terminated"
Dec 16 13:06:07.179971 systemd[1]: cri-containerd-3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b.scope: Deactivated successfully.
Dec 16 13:06:07.181400 containerd[1553]: time="2025-12-16T13:06:07.181360532Z" level=info msg="received container exit event container_id:\"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\" id:\"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\" pid:3307 exited_at:{seconds:1765890367 nanos:181056243}"
Dec 16 13:06:07.205795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b-rootfs.mount: Deactivated successfully.
Dec 16 13:06:07.425104 containerd[1553]: time="2025-12-16T13:06:07.424962164Z" level=info msg="StopContainer for \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\" returns successfully"
Dec 16 13:06:07.427609 containerd[1553]: time="2025-12-16T13:06:07.427568078Z" level=info msg="StopPodSandbox for \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\""
Dec 16 13:06:07.428964 containerd[1553]: time="2025-12-16T13:06:07.428934485Z" level=info msg="Container to stop \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:06:07.428964 containerd[1553]: time="2025-12-16T13:06:07.428953381Z" level=info msg="Container to stop \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:06:07.428964 containerd[1553]: time="2025-12-16T13:06:07.428961697Z" level=info msg="Container to stop \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:06:07.429054 containerd[1553]: time="2025-12-16T13:06:07.428969852Z" level=info msg="Container to stop \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:06:07.429054 containerd[1553]: time="2025-12-16T13:06:07.428979270Z" level=info msg="Container to stop \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:06:07.437205 systemd[1]: cri-containerd-30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33.scope: Deactivated successfully.
Dec 16 13:06:07.444907 containerd[1553]: time="2025-12-16T13:06:07.444863817Z" level=info msg="received sandbox exit event container_id:\"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" id:\"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" exit_status:137 exited_at:{seconds:1765890367 nanos:444657776}" monitor_name=podsandbox
Dec 16 13:06:07.469500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33-rootfs.mount: Deactivated successfully.
Dec 16 13:06:07.737721 containerd[1553]: time="2025-12-16T13:06:07.737669188Z" level=info msg="shim disconnected" id=30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33 namespace=k8s.io
Dec 16 13:06:07.737895 containerd[1553]: time="2025-12-16T13:06:07.737868857Z" level=warning msg="cleaning up after shim disconnected" id=30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33 namespace=k8s.io
Dec 16 13:06:07.748731 containerd[1553]: time="2025-12-16T13:06:07.737885539Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:06:07.766487 containerd[1553]: time="2025-12-16T13:06:07.766430285Z" level=info msg="TearDown network for sandbox \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" successfully"
Dec 16 13:06:07.766487 containerd[1553]: time="2025-12-16T13:06:07.766463478Z" level=info msg="StopPodSandbox for \"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" returns successfully"
Dec 16 13:06:07.768370 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33-shm.mount: Deactivated successfully.
Dec 16 13:06:07.777783 containerd[1553]: time="2025-12-16T13:06:07.777718383Z" level=info msg="received sandbox container exit event sandbox_id:\"30a340126618ec10b5b807b313d1788dcfee57088972b536335ba2cd425bec33\" exit_status:137 exited_at:{seconds:1765890367 nanos:444657776}" monitor_name=criService
Dec 16 13:06:07.801952 containerd[1553]: time="2025-12-16T13:06:07.801840070Z" level=info msg="StopContainer for \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\" returns successfully"
Dec 16 13:06:07.805671 containerd[1553]: time="2025-12-16T13:06:07.805593315Z" level=info msg="StopPodSandbox for \"a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5\""
Dec 16 13:06:07.805827 containerd[1553]: time="2025-12-16T13:06:07.805679259Z" level=info msg="Container to stop \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 16 13:06:07.812743 systemd[1]: cri-containerd-a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5.scope: Deactivated successfully.
Dec 16 13:06:07.813726 containerd[1553]: time="2025-12-16T13:06:07.813569959Z" level=info msg="received sandbox exit event container_id:\"a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5\" id:\"a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5\" exit_status:137 exited_at:{seconds:1765890367 nanos:813294035}" monitor_name=podsandbox
Dec 16 13:06:07.836442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5-rootfs.mount: Deactivated successfully.
Dec 16 13:06:07.898722 kubelet[2716]: I1216 13:06:07.898679 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a973bf97-745f-417a-a95e-a0d58f0e45a0-clustermesh-secrets\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.898722 kubelet[2716]: I1216 13:06:07.898714 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-lib-modules\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.898889 kubelet[2716]: I1216 13:06:07.898739 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a973bf97-745f-417a-a95e-a0d58f0e45a0-hubble-tls\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.898889 kubelet[2716]: I1216 13:06:07.898755 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-xtables-lock\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.898889 kubelet[2716]: I1216 13:06:07.898794 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.898889 kubelet[2716]: I1216 13:06:07.898803 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.898889 kubelet[2716]: I1216 13:06:07.898836 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-cgroup\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.898889 kubelet[2716]: I1216 13:06:07.898858 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-etc-cni-netd\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.899041 kubelet[2716]: I1216 13:06:07.898885 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-host-proc-sys-net\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.899041 kubelet[2716]: I1216 13:06:07.898925 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-config-path\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.899041 kubelet[2716]: I1216 13:06:07.898952 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pk9w\" (UniqueName: \"kubernetes.io/projected/a973bf97-745f-417a-a95e-a0d58f0e45a0-kube-api-access-4pk9w\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.899041 kubelet[2716]: I1216 13:06:07.898969 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-host-proc-sys-kernel\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.899041 kubelet[2716]: I1216 13:06:07.899003 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-run\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.899041 kubelet[2716]: I1216 13:06:07.899023 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-hostproc\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.899251 kubelet[2716]: I1216 13:06:07.899041 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-bpf-maps\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.899251 kubelet[2716]: I1216 13:06:07.899059 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cni-path\") pod \"a973bf97-745f-417a-a95e-a0d58f0e45a0\" (UID: \"a973bf97-745f-417a-a95e-a0d58f0e45a0\") "
Dec 16 13:06:07.899251 kubelet[2716]: I1216 13:06:07.899096 2716 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:07.899251 kubelet[2716]: I1216 13:06:07.899109 2716 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:07.899251 kubelet[2716]: I1216 13:06:07.899130 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.899251 kubelet[2716]: I1216 13:06:07.899150 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.899392 kubelet[2716]: I1216 13:06:07.899166 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.899392 kubelet[2716]: I1216 13:06:07.899184 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.899392 kubelet[2716]: I1216 13:06:07.899193 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.899392 kubelet[2716]: I1216 13:06:07.899201 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.899392 kubelet[2716]: I1216 13:06:07.899215 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.899509 kubelet[2716]: I1216 13:06:07.899256 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 16 13:06:07.902177 kubelet[2716]: I1216 13:06:07.902145 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:06:07.923304 kubelet[2716]: I1216 13:06:07.923253 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a973bf97-745f-417a-a95e-a0d58f0e45a0-kube-api-access-4pk9w" (OuterVolumeSpecName: "kube-api-access-4pk9w") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "kube-api-access-4pk9w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:06:07.923788 kubelet[2716]: I1216 13:06:07.923756 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a973bf97-745f-417a-a95e-a0d58f0e45a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:06:07.924029 kubelet[2716]: I1216 13:06:07.924000 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a973bf97-745f-417a-a95e-a0d58f0e45a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a973bf97-745f-417a-a95e-a0d58f0e45a0" (UID: "a973bf97-745f-417a-a95e-a0d58f0e45a0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 16 13:06:07.924287 systemd[1]: var-lib-kubelet-pods-a973bf97\x2d745f\x2d417a\x2da95e\x2da0d58f0e45a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4pk9w.mount: Deactivated successfully.
Dec 16 13:06:07.924398 systemd[1]: var-lib-kubelet-pods-a973bf97\x2d745f\x2d417a\x2da95e\x2da0d58f0e45a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 16 13:06:07.924474 systemd[1]: var-lib-kubelet-pods-a973bf97\x2d745f\x2d417a\x2da95e\x2da0d58f0e45a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 16 13:06:07.963928 containerd[1553]: time="2025-12-16T13:06:07.963886040Z" level=info msg="shim disconnected" id=a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5 namespace=k8s.io
Dec 16 13:06:07.963928 containerd[1553]: time="2025-12-16T13:06:07.963919603Z" level=warning msg="cleaning up after shim disconnected" id=a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5 namespace=k8s.io
Dec 16 13:06:07.964429 containerd[1553]: time="2025-12-16T13:06:07.963928911Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 16 13:06:07.978854 containerd[1553]: time="2025-12-16T13:06:07.978435369Z" level=info msg="received sandbox container exit event sandbox_id:\"a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5\" exit_status:137 exited_at:{seconds:1765890367 nanos:813294035}" monitor_name=criService
Dec 16 13:06:07.978854 containerd[1553]: time="2025-12-16T13:06:07.978621493Z" level=info msg="TearDown network for sandbox \"a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5\" successfully"
Dec 16 13:06:07.978854 containerd[1553]: time="2025-12-16T13:06:07.978651840Z" level=info msg="StopPodSandbox for \"a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5\" returns successfully"
Dec 16 13:06:07.980538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5bcc985d1db7dacfa225dee46aa0565f6597acc6635d123cea3bc2bc8c170c5-shm.mount: Deactivated successfully.
Dec 16 13:06:07.999952 kubelet[2716]: I1216 13:06:07.999809 2716 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a973bf97-745f-417a-a95e-a0d58f0e45a0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:07.999952 kubelet[2716]: I1216 13:06:07.999845 2716 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a973bf97-745f-417a-a95e-a0d58f0e45a0-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:07.999952 kubelet[2716]: I1216 13:06:07.999858 2716 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:07.999952 kubelet[2716]: I1216 13:06:07.999869 2716 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:07.999952 kubelet[2716]: I1216 13:06:07.999880 2716 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:07.999952 kubelet[2716]: I1216 13:06:07.999892 2716 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:07.999952 kubelet[2716]: I1216 13:06:07.999903 2716 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4pk9w\" (UniqueName: \"kubernetes.io/projected/a973bf97-745f-417a-a95e-a0d58f0e45a0-kube-api-access-4pk9w\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:07.999952 kubelet[2716]: I1216 13:06:07.999917 2716 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:08.000651 kubelet[2716]: I1216 13:06:07.999962 2716 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:08.000651 kubelet[2716]: I1216 13:06:07.999975 2716 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:08.000651 kubelet[2716]: I1216 13:06:07.999987 2716 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:08.000651 kubelet[2716]: I1216 13:06:08.000010 2716 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a973bf97-745f-417a-a95e-a0d58f0e45a0-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:08.101257 kubelet[2716]: I1216 13:06:08.101187 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsrqz\" (UniqueName: \"kubernetes.io/projected/259fd0c1-46ae-4cbf-8ee8-88cab877be25-kube-api-access-nsrqz\") pod \"259fd0c1-46ae-4cbf-8ee8-88cab877be25\" (UID: \"259fd0c1-46ae-4cbf-8ee8-88cab877be25\") "
Dec 16 13:06:08.101438 kubelet[2716]: I1216 13:06:08.101283 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/259fd0c1-46ae-4cbf-8ee8-88cab877be25-cilium-config-path\") pod \"259fd0c1-46ae-4cbf-8ee8-88cab877be25\" (UID: \"259fd0c1-46ae-4cbf-8ee8-88cab877be25\") "
Dec 16 13:06:08.104804 kubelet[2716]: I1216 13:06:08.104737 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259fd0c1-46ae-4cbf-8ee8-88cab877be25-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "259fd0c1-46ae-4cbf-8ee8-88cab877be25" (UID: "259fd0c1-46ae-4cbf-8ee8-88cab877be25"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 13:06:08.105050 kubelet[2716]: I1216 13:06:08.105024 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259fd0c1-46ae-4cbf-8ee8-88cab877be25-kube-api-access-nsrqz" (OuterVolumeSpecName: "kube-api-access-nsrqz") pod "259fd0c1-46ae-4cbf-8ee8-88cab877be25" (UID: "259fd0c1-46ae-4cbf-8ee8-88cab877be25"). InnerVolumeSpecName "kube-api-access-nsrqz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 13:06:08.106248 systemd[1]: var-lib-kubelet-pods-259fd0c1\x2d46ae\x2d4cbf\x2d8ee8\x2d88cab877be25-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnsrqz.mount: Deactivated successfully.
Dec 16 13:06:08.185125 sshd[4337]: Connection closed by 10.0.0.1 port 45232
Dec 16 13:06:08.185804 sshd-session[4334]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:08.192565 kubelet[2716]: I1216 13:06:08.192534 2716 scope.go:117] "RemoveContainer" containerID="030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792"
Dec 16 13:06:08.194221 containerd[1553]: time="2025-12-16T13:06:08.194177417Z" level=info msg="RemoveContainer for \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\""
Dec 16 13:06:08.196488 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:45232.service: Deactivated successfully.
Dec 16 13:06:08.198842 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 13:06:08.199724 systemd-logind[1537]: Session 24 logged out. Waiting for processes to exit.
Dec 16 13:06:08.202407 kubelet[2716]: I1216 13:06:08.202091 2716 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/259fd0c1-46ae-4cbf-8ee8-88cab877be25-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:08.202407 kubelet[2716]: I1216 13:06:08.202114 2716 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nsrqz\" (UniqueName: \"kubernetes.io/projected/259fd0c1-46ae-4cbf-8ee8-88cab877be25-kube-api-access-nsrqz\") on node \"localhost\" DevicePath \"\""
Dec 16 13:06:08.203755 systemd[1]: Started sshd@24-10.0.0.80:22-10.0.0.1:45240.service - OpenSSH per-connection server daemon (10.0.0.1:45240).
Dec 16 13:06:08.204943 systemd-logind[1537]: Removed session 24.
Dec 16 13:06:08.221556 systemd[1]: Removed slice kubepods-burstable-poda973bf97_745f_417a_a95e_a0d58f0e45a0.slice - libcontainer container kubepods-burstable-poda973bf97_745f_417a_a95e_a0d58f0e45a0.slice.
Dec 16 13:06:08.221676 systemd[1]: kubepods-burstable-poda973bf97_745f_417a_a95e_a0d58f0e45a0.slice: Consumed 6.748s CPU time, 123M memory peak, 232K read from disk, 13.3M written to disk.
Dec 16 13:06:08.273685 systemd[1]: Removed slice kubepods-besteffort-pod259fd0c1_46ae_4cbf_8ee8_88cab877be25.slice - libcontainer container kubepods-besteffort-pod259fd0c1_46ae_4cbf_8ee8_88cab877be25.slice.
Dec 16 13:06:08.316273 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 45240 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:06:08.318041 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:06:08.322327 systemd-logind[1537]: New session 25 of user core.
Dec 16 13:06:08.328391 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 13:06:08.375769 containerd[1553]: time="2025-12-16T13:06:08.375694608Z" level=info msg="RemoveContainer for \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\" returns successfully"
Dec 16 13:06:08.382554 kubelet[2716]: I1216 13:06:08.382519 2716 scope.go:117] "RemoveContainer" containerID="abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329"
Dec 16 13:06:08.384831 containerd[1553]: time="2025-12-16T13:06:08.384280234Z" level=info msg="RemoveContainer for \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\""
Dec 16 13:06:08.393310 containerd[1553]: time="2025-12-16T13:06:08.392802250Z" level=info msg="RemoveContainer for \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\" returns successfully"
Dec 16 13:06:08.393465 kubelet[2716]: I1216 13:06:08.393145 2716 scope.go:117] "RemoveContainer" containerID="c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac"
Dec 16 13:06:08.395776 containerd[1553]: time="2025-12-16T13:06:08.395718953Z" level=info msg="RemoveContainer for \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\""
Dec 16 13:06:08.403385 containerd[1553]: time="2025-12-16T13:06:08.402878519Z" level=info msg="RemoveContainer for \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\" returns successfully"
Dec 16 13:06:08.403522 kubelet[2716]: I1216 13:06:08.403171 2716 scope.go:117] "RemoveContainer" containerID="fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2"
Dec 16 13:06:08.405395 containerd[1553]: time="2025-12-16T13:06:08.405363091Z" level=info msg="RemoveContainer for \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\""
Dec 16 13:06:08.412758 containerd[1553]: time="2025-12-16T13:06:08.412690246Z" level=info msg="RemoveContainer for \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\" returns successfully"
Dec 16 13:06:08.413022 kubelet[2716]: I1216 13:06:08.412979 2716 scope.go:117] "RemoveContainer" containerID="97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8"
Dec 16 13:06:08.414755 containerd[1553]: time="2025-12-16T13:06:08.414717749Z" level=info msg="RemoveContainer for \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\""
Dec 16 13:06:08.419575 containerd[1553]: time="2025-12-16T13:06:08.419538491Z" level=info msg="RemoveContainer for \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\" returns successfully"
Dec 16 13:06:08.419759 kubelet[2716]: I1216 13:06:08.419718 2716 scope.go:117] "RemoveContainer" containerID="030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792"
Dec 16 13:06:08.420242 containerd[1553]: time="2025-12-16T13:06:08.420164651Z" level=error msg="ContainerStatus for \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\": not found"
Dec 16 13:06:08.420448 kubelet[2716]: E1216 13:06:08.420406 2716 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\": not found" containerID="030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792"
Dec 16 13:06:08.420515 kubelet[2716]: I1216 13:06:08.420459 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792"} err="failed to get container status \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\": rpc error: code = NotFound desc = an error occurred when try to find container \"030c08726c40d5628e833efb89123e6a1bd3641f05ca38bcf655d45c7d1bb792\": not found"
Dec 16 13:06:08.420551 kubelet[2716]: I1216 13:06:08.420515 2716 scope.go:117] "RemoveContainer" containerID="abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329"
Dec 16 13:06:08.420790 containerd[1553]: time="2025-12-16T13:06:08.420728141Z" level=error msg="ContainerStatus for \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\": not found"
Dec 16 13:06:08.420873 kubelet[2716]: E1216 13:06:08.420850 2716 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\": not found" containerID="abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329"
Dec 16 13:06:08.420910 kubelet[2716]: I1216 13:06:08.420868 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329"} err="failed to get container status \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\": rpc error: code = NotFound desc = an error occurred when try to find container \"abedf6cae1d7dbd5e79cba78498c2d9116ab465b81169c6c5bbbd627dd347329\": not found"
Dec 16 13:06:08.420937 kubelet[2716]: I1216 13:06:08.420917 2716 scope.go:117] "RemoveContainer" containerID="c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac"
Dec 16 13:06:08.421149 containerd[1553]: time="2025-12-16T13:06:08.421123473Z" level=error msg="ContainerStatus for \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\": not found"
Dec 16 13:06:08.421410 kubelet[2716]: E1216 13:06:08.421385 2716 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\": not found" containerID="c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac"
Dec 16 13:06:08.421465 kubelet[2716]: I1216 13:06:08.421411 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac"} err="failed to get container status \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\": rpc error: code = NotFound desc = an error occurred when try to find container \"c98f9141172269a6f88eb7e0cc3bf2530b48e481a691b2f17b073b722f0a8fac\": not found"
Dec 16 13:06:08.421465 kubelet[2716]: I1216 13:06:08.421426 2716 scope.go:117] "RemoveContainer" containerID="fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2"
Dec 16 13:06:08.421697 containerd[1553]: time="2025-12-16T13:06:08.421630627Z" level=error msg="ContainerStatus for \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\": not found"
Dec 16 13:06:08.421849 kubelet[2716]: E1216 13:06:08.421822 2716 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\": not found" containerID="fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2"
Dec 16 13:06:08.421885 kubelet[2716]: I1216 13:06:08.421848 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2"} err="failed to get container status \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc9d5f47fecc3f5e5e66d9857d35bacfe8121e602ed56167f5954cf74eb637e2\": not found"
Dec 16 13:06:08.421885 kubelet[2716]: I1216 13:06:08.421864 2716 scope.go:117] "RemoveContainer" containerID="97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8"
Dec 16 13:06:08.422067 containerd[1553]: time="2025-12-16T13:06:08.422028402Z" level=error msg="ContainerStatus for \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\": not found"
Dec 16 13:06:08.422248 kubelet[2716]: E1216 13:06:08.422171 2716 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\": not found" containerID="97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8"
Dec 16 13:06:08.422278 kubelet[2716]: I1216 13:06:08.422257 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8"} err="failed to get container status \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"97f5b1890c5fd19771a4529957f22300b42d40d138c7898732326103dcc9b2c8\": not found"
Dec 16 13:06:08.422302 kubelet[2716]: I1216 13:06:08.422283 2716 scope.go:117] "RemoveContainer" containerID="3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b"
Dec 16 13:06:08.423775 containerd[1553]: time="2025-12-16T13:06:08.423695621Z" level=info msg="RemoveContainer for \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\""
Dec 16 13:06:08.428343 containerd[1553]: time="2025-12-16T13:06:08.428303127Z" level=info msg="RemoveContainer for \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\" returns successfully"
Dec 16 13:06:08.428522 kubelet[2716]: I1216 13:06:08.428478 2716 scope.go:117] "RemoveContainer" containerID="3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b"
Dec 16 13:06:08.428785 containerd[1553]: time="2025-12-16T13:06:08.428741650Z" level=error msg="ContainerStatus for \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\": not found"
Dec 16 13:06:08.428924 kubelet[2716]: E1216 13:06:08.428901 2716 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\": not found" containerID="3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b"
Dec 16 13:06:08.428965 kubelet[2716]: I1216 13:06:08.428946 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b"} err="failed to get container status \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c3a2ad63db5c535c7749ca41905b4f926e1bc90fec64161dfdc68bdbea7f58b\": not found"
Dec 16 13:06:08.915402 sshd[4486]: Connection closed by 10.0.0.1 port 45240
Dec 16 13:06:08.915842 sshd-session[4483]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:08.929055 systemd[1]: sshd@24-10.0.0.80:22-10.0.0.1:45240.service: Deactivated successfully.
Dec 16 13:06:08.931504 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 13:06:08.933814 systemd-logind[1537]: Session 25 logged out. Waiting for processes to exit.
Dec 16 13:06:08.938834 systemd[1]: Started sshd@25-10.0.0.80:22-10.0.0.1:45242.service - OpenSSH per-connection server daemon (10.0.0.1:45242).
Dec 16 13:06:08.940724 systemd-logind[1537]: Removed session 25.
Dec 16 13:06:08.978935 systemd[1]: Created slice kubepods-burstable-pod5e27ece8_3fcf_43f9_a6c5_e1decbe2c870.slice - libcontainer container kubepods-burstable-pod5e27ece8_3fcf_43f9_a6c5_e1decbe2c870.slice.
Dec 16 13:06:09.008510 sshd[4498]: Accepted publickey for core from 10.0.0.1 port 45242 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:06:09.010038 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:06:09.014529 systemd-logind[1537]: New session 26 of user core.
Dec 16 13:06:09.024529 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:06:09.076153 sshd[4501]: Connection closed by 10.0.0.1 port 45242
Dec 16 13:06:09.076586 sshd-session[4498]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:09.092495 systemd[1]: sshd@25-10.0.0.80:22-10.0.0.1:45242.service: Deactivated successfully.
Dec 16 13:06:09.095363 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:06:09.096343 systemd-logind[1537]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:06:09.100751 systemd[1]: Started sshd@26-10.0.0.80:22-10.0.0.1:45252.service - OpenSSH per-connection server daemon (10.0.0.1:45252).
Dec 16 13:06:09.101772 systemd-logind[1537]: Removed session 26.
Dec 16 13:06:09.106827 kubelet[2716]: I1216 13:06:09.106757 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-lib-modules\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.106827 kubelet[2716]: I1216 13:06:09.106814 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-cni-path\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.106827 kubelet[2716]: I1216 13:06:09.106836 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-cilium-run\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107415 kubelet[2716]: I1216 13:06:09.106921 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-xtables-lock\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107415 kubelet[2716]: I1216 13:06:09.106980 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-hostproc\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107415 kubelet[2716]: I1216 13:06:09.107014 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-host-proc-sys-net\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107415 kubelet[2716]: I1216 13:06:09.107036 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-host-proc-sys-kernel\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107415 kubelet[2716]: I1216 13:06:09.107057 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-hubble-tls\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107415 kubelet[2716]: I1216 13:06:09.107103 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-bpf-maps\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107613 kubelet[2716]: I1216 13:06:09.107147 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-cilium-config-path\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107613 kubelet[2716]: I1216 13:06:09.107194 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmnxk\" (UniqueName: \"kubernetes.io/projected/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-kube-api-access-fmnxk\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107613 kubelet[2716]: I1216 13:06:09.107225 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-cilium-cgroup\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107613 kubelet[2716]: I1216 13:06:09.107264 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-clustermesh-secrets\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107613 kubelet[2716]: I1216 13:06:09.107286 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-cilium-ipsec-secrets\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.107775 kubelet[2716]: I1216 13:06:09.107308 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e27ece8-3fcf-43f9-a6c5-e1decbe2c870-etc-cni-netd\") pod \"cilium-jdrkr\" (UID: \"5e27ece8-3fcf-43f9-a6c5-e1decbe2c870\") " pod="kube-system/cilium-jdrkr"
Dec 16 13:06:09.158570 sshd[4508]: Accepted publickey for core from 10.0.0.1 port 45252 ssh2: RSA SHA256:U5R1V2YL8grSrRz9PVaqQqCOxjm1DLwZWE3rSGcR9eI
Dec 16 13:06:09.160326 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:06:09.165114 systemd-logind[1537]: New session 27 of user core.
Dec 16 13:06:09.177508 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 13:06:09.285046 kubelet[2716]: E1216 13:06:09.284987 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:09.286066 containerd[1553]: time="2025-12-16T13:06:09.286015462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdrkr,Uid:5e27ece8-3fcf-43f9-a6c5-e1decbe2c870,Namespace:kube-system,Attempt:0,}"
Dec 16 13:06:09.308337 containerd[1553]: time="2025-12-16T13:06:09.308281202Z" level=info msg="connecting to shim 320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc" address="unix:///run/containerd/s/c370b25ec51a0e25b655747567ea5c4a3aa34808fa1d2f92e7ad6550230df4db" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:06:09.338492 systemd[1]: Started cri-containerd-320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc.scope - libcontainer container 320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc.
Dec 16 13:06:09.369880 containerd[1553]: time="2025-12-16T13:06:09.369841156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdrkr,Uid:5e27ece8-3fcf-43f9-a6c5-e1decbe2c870,Namespace:kube-system,Attempt:0,} returns sandbox id \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\""
Dec 16 13:06:09.370670 kubelet[2716]: E1216 13:06:09.370633 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:09.375709 containerd[1553]: time="2025-12-16T13:06:09.375647076Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 16 13:06:09.383290 containerd[1553]: time="2025-12-16T13:06:09.383257535Z" level=info msg="Container c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:06:09.390360 containerd[1553]: time="2025-12-16T13:06:09.390319651Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5\""
Dec 16 13:06:09.390889 containerd[1553]: time="2025-12-16T13:06:09.390844519Z" level=info msg="StartContainer for \"c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5\""
Dec 16 13:06:09.391777 containerd[1553]: time="2025-12-16T13:06:09.391730422Z" level=info msg="connecting to shim c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5" address="unix:///run/containerd/s/c370b25ec51a0e25b655747567ea5c4a3aa34808fa1d2f92e7ad6550230df4db" protocol=ttrpc version=3
Dec 16 13:06:09.417405 systemd[1]: Started cri-containerd-c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5.scope - libcontainer container c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5.
Dec 16 13:06:09.448545 containerd[1553]: time="2025-12-16T13:06:09.448450961Z" level=info msg="StartContainer for \"c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5\" returns successfully"
Dec 16 13:06:09.459242 systemd[1]: cri-containerd-c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5.scope: Deactivated successfully.
Dec 16 13:06:09.462197 containerd[1553]: time="2025-12-16T13:06:09.462145198Z" level=info msg="received container exit event container_id:\"c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5\" id:\"c0fc2051e2f958c105d289990d9bae9c25913c60f0db894063b7fd03163ff4d5\" pid:4582 exited_at:{seconds:1765890369 nanos:461780054}"
Dec 16 13:06:09.898158 kubelet[2716]: I1216 13:06:09.898072 2716 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259fd0c1-46ae-4cbf-8ee8-88cab877be25" path="/var/lib/kubelet/pods/259fd0c1-46ae-4cbf-8ee8-88cab877be25/volumes"
Dec 16 13:06:09.905098 kubelet[2716]: I1216 13:06:09.905039 2716 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a973bf97-745f-417a-a95e-a0d58f0e45a0" path="/var/lib/kubelet/pods/a973bf97-745f-417a-a95e-a0d58f0e45a0/volumes"
Dec 16 13:06:10.204508 kubelet[2716]: E1216 13:06:10.204348 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:10.330016 containerd[1553]: time="2025-12-16T13:06:10.329963773Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 16 13:06:10.491254 containerd[1553]: time="2025-12-16T13:06:10.489157772Z" level=info msg="Container 97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:06:10.780934 containerd[1553]: time="2025-12-16T13:06:10.780778480Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa\""
Dec 16 13:06:10.781445 containerd[1553]: time="2025-12-16T13:06:10.781404219Z" level=info msg="StartContainer for \"97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa\""
Dec 16 13:06:10.782654 containerd[1553]: time="2025-12-16T13:06:10.782614176Z" level=info msg="connecting to shim 97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa" address="unix:///run/containerd/s/c370b25ec51a0e25b655747567ea5c4a3aa34808fa1d2f92e7ad6550230df4db" protocol=ttrpc version=3
Dec 16 13:06:10.815534 systemd[1]: Started cri-containerd-97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa.scope - libcontainer container 97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa.
Dec 16 13:06:10.855396 systemd[1]: cri-containerd-97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa.scope: Deactivated successfully.
Dec 16 13:06:11.153577 containerd[1553]: time="2025-12-16T13:06:11.153418589Z" level=info msg="received container exit event container_id:\"97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa\" id:\"97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa\" pid:4627 exited_at:{seconds:1765890370 nanos:855543888}"
Dec 16 13:06:11.155255 containerd[1553]: time="2025-12-16T13:06:11.155209749Z" level=info msg="StartContainer for \"97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa\" returns successfully"
Dec 16 13:06:11.176559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97a721196eedd5c934786bba362dd2660119ce0f50b51c5c6d0b8756d17c9cfa-rootfs.mount: Deactivated successfully.
Dec 16 13:06:11.208255 kubelet[2716]: E1216 13:06:11.208204 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:11.949009 kubelet[2716]: E1216 13:06:11.948947 2716 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 16 13:06:12.212622 kubelet[2716]: E1216 13:06:12.212324 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:12.244648 containerd[1553]: time="2025-12-16T13:06:12.244456884Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 16 13:06:12.256449 containerd[1553]: time="2025-12-16T13:06:12.256392079Z" level=info msg="Container 1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:06:12.267675 containerd[1553]: time="2025-12-16T13:06:12.267614410Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8\""
Dec 16 13:06:12.268191 containerd[1553]: time="2025-12-16T13:06:12.268164283Z" level=info msg="StartContainer for \"1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8\""
Dec 16 13:06:12.269632 containerd[1553]: time="2025-12-16T13:06:12.269603325Z" level=info msg="connecting to shim 1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8" address="unix:///run/containerd/s/c370b25ec51a0e25b655747567ea5c4a3aa34808fa1d2f92e7ad6550230df4db" protocol=ttrpc version=3
Dec 16 13:06:12.291423 systemd[1]: Started cri-containerd-1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8.scope - libcontainer container 1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8.
Dec 16 13:06:12.390490 containerd[1553]: time="2025-12-16T13:06:12.390443596Z" level=info msg="StartContainer for \"1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8\" returns successfully"
Dec 16 13:06:12.393090 systemd[1]: cri-containerd-1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8.scope: Deactivated successfully.
Dec 16 13:06:12.394596 containerd[1553]: time="2025-12-16T13:06:12.394562624Z" level=info msg="received container exit event container_id:\"1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8\" id:\"1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8\" pid:4671 exited_at:{seconds:1765890372 nanos:394360981}"
Dec 16 13:06:12.419739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aef930afa4d32196438a03fee6faad1135cfe9ccfac6485e0c693f911d37af8-rootfs.mount: Deactivated successfully.
Dec 16 13:06:13.220880 kubelet[2716]: E1216 13:06:13.220841 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:13.343944 containerd[1553]: time="2025-12-16T13:06:13.343885487Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 16 13:06:13.424468 containerd[1553]: time="2025-12-16T13:06:13.424416933Z" level=info msg="Container de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:06:13.478760 containerd[1553]: time="2025-12-16T13:06:13.478716621Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d\""
Dec 16 13:06:13.480311 containerd[1553]: time="2025-12-16T13:06:13.480276130Z" level=info msg="StartContainer for \"de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d\""
Dec 16 13:06:13.481127 containerd[1553]: time="2025-12-16T13:06:13.481092200Z" level=info msg="connecting to shim de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d" address="unix:///run/containerd/s/c370b25ec51a0e25b655747567ea5c4a3aa34808fa1d2f92e7ad6550230df4db" protocol=ttrpc version=3
Dec 16 13:06:13.500407 systemd[1]: Started cri-containerd-de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d.scope - libcontainer container de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d.
Dec 16 13:06:13.532620 systemd[1]: cri-containerd-de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d.scope: Deactivated successfully.
Dec 16 13:06:13.536209 containerd[1553]: time="2025-12-16T13:06:13.536145087Z" level=info msg="received container exit event container_id:\"de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d\" id:\"de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d\" pid:4713 exited_at:{seconds:1765890373 nanos:534502110}"
Dec 16 13:06:13.546147 containerd[1553]: time="2025-12-16T13:06:13.546096831Z" level=info msg="StartContainer for \"de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d\" returns successfully"
Dec 16 13:06:13.563387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de68d0953dd195b03ef342c5ef406b1469538f262b47bb21ad83c1f07493fe7d-rootfs.mount: Deactivated successfully.
Dec 16 13:06:14.179032 kubelet[2716]: I1216 13:06:14.178943 2716 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-16T13:06:14Z","lastTransitionTime":"2025-12-16T13:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 16 13:06:14.225884 kubelet[2716]: E1216 13:06:14.225855 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:14.230514 containerd[1553]: time="2025-12-16T13:06:14.230473612Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 16 13:06:14.260084 containerd[1553]: time="2025-12-16T13:06:14.259314515Z" level=info msg="Container b27356b01264db578bf71f152198292477031794a3fed3f750f5b5958607f1a0: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:06:14.266335 containerd[1553]: time="2025-12-16T13:06:14.266296421Z" level=info msg="CreateContainer within sandbox \"320f29a281a887cec1eb3bcf4eb3b4e8785bc297f15ec77922856c10f64a48bc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b27356b01264db578bf71f152198292477031794a3fed3f750f5b5958607f1a0\""
Dec 16 13:06:14.266880 containerd[1553]: time="2025-12-16T13:06:14.266744271Z" level=info msg="StartContainer for \"b27356b01264db578bf71f152198292477031794a3fed3f750f5b5958607f1a0\""
Dec 16 13:06:14.267724 containerd[1553]: time="2025-12-16T13:06:14.267686298Z" level=info msg="connecting to shim b27356b01264db578bf71f152198292477031794a3fed3f750f5b5958607f1a0" address="unix:///run/containerd/s/c370b25ec51a0e25b655747567ea5c4a3aa34808fa1d2f92e7ad6550230df4db" protocol=ttrpc version=3
Dec 16 13:06:14.292404 systemd[1]: Started cri-containerd-b27356b01264db578bf71f152198292477031794a3fed3f750f5b5958607f1a0.scope - libcontainer container b27356b01264db578bf71f152198292477031794a3fed3f750f5b5958607f1a0.
Dec 16 13:06:14.348983 containerd[1553]: time="2025-12-16T13:06:14.348936566Z" level=info msg="StartContainer for \"b27356b01264db578bf71f152198292477031794a3fed3f750f5b5958607f1a0\" returns successfully"
Dec 16 13:06:14.839305 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 16 13:06:15.231706 kubelet[2716]: E1216 13:06:15.231625 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:15.250039 kubelet[2716]: I1216 13:06:15.249962 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jdrkr" podStartSLOduration=7.249947409 podStartE2EDuration="7.249947409s" podCreationTimestamp="2025-12-16 13:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:06:15.249928643 +0000 UTC m=+93.442756833" watchObservedRunningTime="2025-12-16 13:06:15.249947409 +0000 UTC m=+93.442775589"
Dec 16 13:06:16.232882 kubelet[2716]: E1216 13:06:16.232836 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:17.975706 systemd-networkd[1457]: lxc_health: Link UP
Dec 16 13:06:17.977754 systemd-networkd[1457]: lxc_health: Gained carrier
Dec 16 13:06:19.139496 systemd-networkd[1457]: lxc_health: Gained IPv6LL
Dec 16 13:06:19.288812 kubelet[2716]: E1216 13:06:19.288760 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:20.240620 kubelet[2716]: E1216 13:06:20.240584 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:21.242264 kubelet[2716]: E1216 13:06:21.242128 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 13:06:24.197207 sshd[4512]: Connection closed by 10.0.0.1 port 45252
Dec 16 13:06:24.197740 sshd-session[4508]: pam_unix(sshd:session): session closed for user core
Dec 16 13:06:24.203058 systemd[1]: sshd@26-10.0.0.80:22-10.0.0.1:45252.service: Deactivated successfully.
Dec 16 13:06:24.205118 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 13:06:24.205981 systemd-logind[1537]: Session 27 logged out. Waiting for processes to exit.
Dec 16 13:06:24.207159 systemd-logind[1537]: Removed session 27.