Jan 20 00:38:16.012279 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026 Jan 20 00:38:16.012298 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:38:16.012309 kernel: BIOS-provided physical RAM map: Jan 20 00:38:16.012315 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 20 00:38:16.012320 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 20 00:38:16.012326 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 20 00:38:16.012332 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 20 00:38:16.012337 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 20 00:38:16.012343 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 20 00:38:16.012348 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 20 00:38:16.012356 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 20 00:38:16.012361 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 20 00:38:16.012366 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 20 00:38:16.012372 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 20 00:38:16.012379 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 20 00:38:16.012385 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 20 00:38:16.012393 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 20 00:38:16.012398 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 20 00:38:16.012404 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 20 00:38:16.012410 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 00:38:16.012415 kernel: NX (Execute Disable) protection: active Jan 20 00:38:16.012421 kernel: APIC: Static calls initialized Jan 20 00:38:16.012426 kernel: efi: EFI v2.7 by EDK II Jan 20 00:38:16.012432 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 20 00:38:16.012438 kernel: SMBIOS 2.8 present. 
Jan 20 00:38:16.012443 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 20 00:38:16.012449 kernel: Hypervisor detected: KVM Jan 20 00:38:16.012457 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 00:38:16.012462 kernel: kvm-clock: using sched offset of 5392349100 cycles Jan 20 00:38:16.012468 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 00:38:16.012474 kernel: tsc: Detected 2445.426 MHz processor Jan 20 00:38:16.012480 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 00:38:16.012487 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 00:38:16.012492 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 20 00:38:16.012498 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 20 00:38:16.012504 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 00:38:16.012512 kernel: Using GB pages for direct mapping Jan 20 00:38:16.012518 kernel: Secure boot disabled Jan 20 00:38:16.012524 kernel: ACPI: Early table checksum verification disabled Jan 20 00:38:16.012530 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 20 00:38:16.012540 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 20 00:38:16.012546 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:38:16.012552 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:38:16.012561 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 20 00:38:16.012567 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:38:16.012573 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:38:16.012579 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:38:16.012585 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:38:16.012591 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 20 00:38:16.012598 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 20 00:38:16.012606 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 20 00:38:16.012612 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 20 00:38:16.012618 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 20 00:38:16.012625 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 20 00:38:16.012631 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 20 00:38:16.012637 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 20 00:38:16.012643 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 20 00:38:16.012649 kernel: No NUMA configuration found Jan 20 00:38:16.012655 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 20 00:38:16.012663 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 20 00:38:16.012669 kernel: Zone ranges: Jan 20 00:38:16.012675 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 00:38:16.012681 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 20 00:38:16.012687 kernel: Normal empty Jan 20 00:38:16.012693 kernel: Movable zone start for each node Jan 20 00:38:16.012699 kernel: Early memory node ranges Jan 20 00:38:16.012706 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 20 00:38:16.012712 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 20 00:38:16.012718 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 20 00:38:16.012726 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 20 00:38:16.012732 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 20 00:38:16.012738 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 20 00:38:16.012780 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 20 00:38:16.012787 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:38:16.012793 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 20 00:38:16.012799 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 20 00:38:16.012805 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:38:16.012811 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 20 00:38:16.012821 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 20 00:38:16.012827 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 20 00:38:16.012833 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 00:38:16.012839 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 00:38:16.012845 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 00:38:16.012851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 00:38:16.012857 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 00:38:16.012863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 00:38:16.012869 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 00:38:16.012875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 00:38:16.012884 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 00:38:16.012890 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 00:38:16.012896 kernel: TSC deadline timer available Jan 20 00:38:16.012902 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 20 00:38:16.012908 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 00:38:16.012914 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 00:38:16.012920 kernel: kvm-guest: setup PV sched yield Jan 20 00:38:16.012926 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 20 00:38:16.012932 kernel: Booting paravirtualized kernel on KVM Jan 20 00:38:16.012941 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 00:38:16.012947 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 00:38:16.012953 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 20 00:38:16.012959 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 20 00:38:16.012966 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 00:38:16.012972 kernel: kvm-guest: PV spinlocks enabled Jan 20 00:38:16.012978 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 00:38:16.012985 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 
00:38:16.012993 kernel: random: crng init done Jan 20 00:38:16.013000 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 00:38:16.013006 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 00:38:16.013012 kernel: Fallback order for Node 0: 0 Jan 20 00:38:16.013018 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 20 00:38:16.013024 kernel: Policy zone: DMA32 Jan 20 00:38:16.013030 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 00:38:16.013037 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 166124K reserved, 0K cma-reserved) Jan 20 00:38:16.013049 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 00:38:16.013066 kernel: ftrace: allocating 37989 entries in 149 pages Jan 20 00:38:16.013079 kernel: ftrace: allocated 149 pages with 4 groups Jan 20 00:38:16.013091 kernel: Dynamic Preempt: voluntary Jan 20 00:38:16.013158 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 00:38:16.013177 kernel: rcu: RCU event tracing is enabled. Jan 20 00:38:16.013187 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 00:38:16.013193 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 00:38:16.013200 kernel: Rude variant of Tasks RCU enabled. Jan 20 00:38:16.013206 kernel: Tracing variant of Tasks RCU enabled. Jan 20 00:38:16.013213 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 00:38:16.013219 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 00:38:16.013226 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 00:38:16.013234 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 00:38:16.013241 kernel: Console: colour dummy device 80x25 Jan 20 00:38:16.013247 kernel: printk: console [ttyS0] enabled Jan 20 00:38:16.013254 kernel: ACPI: Core revision 20230628 Jan 20 00:38:16.013260 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 00:38:16.013269 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 00:38:16.013275 kernel: x2apic enabled Jan 20 00:38:16.013282 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 00:38:16.013288 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 00:38:16.013295 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 00:38:16.013301 kernel: kvm-guest: setup PV IPIs Jan 20 00:38:16.013307 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 00:38:16.013314 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 20 00:38:16.013320 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 20 00:38:16.013329 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 00:38:16.013335 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 00:38:16.013342 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 00:38:16.013348 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 00:38:16.013355 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 00:38:16.013361 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 00:38:16.013368 kernel: Speculative Store Bypass: Vulnerable Jan 20 00:38:16.013374 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 00:38:16.013381 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 20 00:38:16.013390 kernel: active return thunk: srso_alias_return_thunk Jan 20 00:38:16.013396 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 00:38:16.013403 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 00:38:16.013409 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 00:38:16.013415 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 00:38:16.013422 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 00:38:16.013428 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 00:38:16.013435 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 00:38:16.013443 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 00:38:16.013450 kernel: Freeing SMP alternatives memory: 32K Jan 20 00:38:16.013456 kernel: pid_max: default: 32768 minimum: 301 Jan 20 00:38:16.013463 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 00:38:16.013469 kernel: landlock: Up and running. Jan 20 00:38:16.013475 kernel: SELinux: Initializing. Jan 20 00:38:16.013482 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:38:16.013488 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:38:16.013495 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 00:38:16.013503 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:38:16.013510 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:38:16.013516 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:38:16.013523 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 00:38:16.013529 kernel: signal: max sigframe size: 1776 Jan 20 00:38:16.013535 kernel: rcu: Hierarchical SRCU implementation. Jan 20 00:38:16.013542 kernel: rcu: Max phase no-delay instances is 400. Jan 20 00:38:16.013548 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 00:38:16.013555 kernel: smp: Bringing up secondary CPUs ... Jan 20 00:38:16.013563 kernel: smpboot: x86: Booting SMP configuration: Jan 20 00:38:16.013570 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 00:38:16.013576 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 00:38:16.013582 kernel: smpboot: Max logical packages: 1 Jan 20 00:38:16.013589 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 20 00:38:16.013595 kernel: devtmpfs: initialized Jan 20 00:38:16.013601 kernel: x86/mm: Memory block size: 128MB Jan 20 00:38:16.013608 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 20 00:38:16.013614 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 20 00:38:16.013623 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 20 00:38:16.013630 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 20 00:38:16.013636 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 20 00:38:16.013642 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 00:38:16.013649 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 00:38:16.013656 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 00:38:16.013662 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 00:38:16.013668 kernel: audit: initializing netlink subsys (disabled) Jan 20 00:38:16.013675 kernel: audit: type=2000 audit(1768869495.455:1): state=initialized audit_enabled=0 res=1 Jan 20 00:38:16.013683 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 00:38:16.013690 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 00:38:16.013696 kernel: cpuidle: using governor menu Jan 20 00:38:16.013702 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 00:38:16.013709 kernel: dca service started, version 1.12.1 Jan 20 00:38:16.013715 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 20 00:38:16.013722 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 00:38:16.013728 kernel: PCI: Using configuration type 1 for base access Jan 20 00:38:16.013734 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 00:38:16.013774 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 00:38:16.013781 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 00:38:16.013788 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 00:38:16.013794 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 00:38:16.013801 kernel: ACPI: Added _OSI(Module Device) Jan 20 00:38:16.013807 kernel: ACPI: Added _OSI(Processor Device) Jan 20 00:38:16.013814 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 00:38:16.013820 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 00:38:16.013826 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 20 00:38:16.013835 kernel: ACPI: Interpreter enabled Jan 20 00:38:16.013842 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 00:38:16.013848 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 00:38:16.013855 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 00:38:16.013861 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 00:38:16.013868 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 00:38:16.013874 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 00:38:16.014070 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 00:38:16.014267 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 00:38:16.014402 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 00:38:16.014412 kernel: PCI host bridge to bus 0000:00 Jan 20 00:38:16.014608 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 00:38:16.014732 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 20 00:38:16.014909 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 00:38:16.015021 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 00:38:16.015223 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 00:38:16.015351 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 20 00:38:16.015465 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 00:38:16.015663 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 20 00:38:16.015878 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 20 00:38:16.016021 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 20 00:38:16.016376 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 20 00:38:16.016600 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 20 00:38:16.016854 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 20 00:38:16.017085 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 00:38:16.017369 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 20 00:38:16.017595 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 20 00:38:16.017814 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 20 00:38:16.017951 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 20 00:38:16.018147 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 20 00:38:16.018284 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 20 00:38:16.018408 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Jan 20 00:38:16.018528 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 20 00:38:16.018655 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 20 00:38:16.018837 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 20 00:38:16.018960 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 20 00:38:16.019109 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 20 00:38:16.019271 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 20 00:38:16.019400 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 20 00:38:16.019520 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 00:38:16.019647 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 20 00:38:16.019815 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 20 00:38:16.019937 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 20 00:38:16.020084 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 20 00:38:16.020256 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 20 00:38:16.020268 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 00:38:16.020275 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 00:38:16.020282 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 00:38:16.020288 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 00:38:16.020299 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 00:38:16.020306 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 00:38:16.020312 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 00:38:16.020319 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 00:38:16.020325 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 20 00:38:16.020332 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 00:38:16.020338 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 00:38:16.020345 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 00:38:16.020351 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 00:38:16.020360 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 00:38:16.020367 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 00:38:16.020373 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 00:38:16.020380 kernel: iommu: Default domain type: Translated Jan 20 00:38:16.020386 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 00:38:16.020393 kernel: efivars: Registered efivars operations Jan 20 00:38:16.020399 kernel: PCI: Using ACPI for IRQ routing Jan 20 00:38:16.020406 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 00:38:16.020413 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 20 00:38:16.020422 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 20 00:38:16.020428 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 20 00:38:16.020434 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 20 00:38:16.020553 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 00:38:16.020671 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 00:38:16.020841 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 00:38:16.020852 kernel: vgaarb: loaded Jan 20 00:38:16.020859 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 20 00:38:16.020865 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 00:38:16.020876 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 00:38:16.020883 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 00:38:16.020889 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 00:38:16.020896 kernel: pnp: PnP ACPI init Jan 20 00:38:16.021027 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 00:38:16.021040 kernel: pnp: PnP ACPI: found 6 devices Jan 20 00:38:16.021054 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 00:38:16.021067 kernel: NET: Registered PF_INET protocol family Jan 20 00:38:16.021087 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 00:38:16.021101 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 00:38:16.021114 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 00:38:16.021121 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 00:38:16.021151 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 00:38:16.021158 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 00:38:16.021165 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:38:16.021171 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:38:16.021178 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 00:38:16.021188 kernel: NET: Registered PF_XDP protocol family Jan 20 00:38:16.021323 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 20 00:38:16.021446 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 20 00:38:16.021561 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 00:38:16.021670 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 00:38:16.021830 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 00:38:16.021943 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 00:38:16.022072 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 20 00:38:16.022240 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 20 00:38:16.022252 kernel: PCI: CLS 0 bytes, default 64 Jan 20 00:38:16.022258 kernel: Initialise system trusted keyrings Jan 20 00:38:16.022265 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 00:38:16.022272 kernel: Key type asymmetric registered Jan 20 00:38:16.022279 kernel: Asymmetric key parser 'x509' registered Jan 20 00:38:16.022285 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 20 00:38:16.022292 kernel: io scheduler mq-deadline registered Jan 20 00:38:16.022302 kernel: io scheduler kyber registered Jan 20 00:38:16.022309 kernel: io scheduler bfq registered Jan 20 00:38:16.022315 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 00:38:16.022322 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 00:38:16.022329 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 00:38:16.022336 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 00:38:16.022342 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 00:38:16.022349 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Jan 20 00:38:16.022356 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 00:38:16.022365 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 00:38:16.022372 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 00:38:16.022501 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 00:38:16.022511 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 00:38:16.022622 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 00:38:16.022735 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:38:15 UTC (1768869495) Jan 20 00:38:16.022895 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 00:38:16.022905 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 00:38:16.022916 kernel: efifb: probing for efifb Jan 20 00:38:16.022923 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 20 00:38:16.022929 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 20 00:38:16.022936 kernel: efifb: scrolling: redraw Jan 20 00:38:16.022942 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 20 00:38:16.022949 kernel: Console: switching to colour frame buffer device 100x37 Jan 20 00:38:16.022955 kernel: fb0: EFI VGA frame buffer device Jan 20 00:38:16.022962 kernel: pstore: Using crash dump compression: deflate Jan 20 00:38:16.022968 kernel: pstore: Registered efi_pstore as persistent store backend Jan 20 00:38:16.022977 kernel: NET: Registered PF_INET6 protocol family Jan 20 00:38:16.022984 kernel: Segment Routing with IPv6 Jan 20 00:38:16.022990 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 00:38:16.022997 kernel: NET: Registered PF_PACKET protocol family Jan 20 00:38:16.023003 kernel: Key type dns_resolver registered Jan 20 00:38:16.023010 kernel: IPI shorthand broadcast: enabled Jan 20 00:38:16.023035 kernel: sched_clock: Marking stable (951014036, 399579614)->(1575188093, -224594443) Jan 20 00:38:16.023051 kernel: registered taskstats version 1 Jan 20 00:38:16.023065 kernel: Loading compiled-in X.509 certificates Jan 20 00:38:16.023082 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1' Jan 20 00:38:16.023096 kernel: Key type .fscrypt registered Jan 20 00:38:16.023110 kernel: Key type fscrypt-provisioning registered Jan 20 00:38:16.023120 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 00:38:16.023150 kernel: ima: Allocated hash algorithm: sha1 Jan 20 00:38:16.023157 kernel: ima: No architecture policies found Jan 20 00:38:16.023163 kernel: clk: Disabling unused clocks Jan 20 00:38:16.023171 kernel: Freeing unused kernel image (initmem) memory: 42880K Jan 20 00:38:16.023177 kernel: Write protecting the kernel read-only data: 36864k Jan 20 00:38:16.023187 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 20 00:38:16.023194 kernel: Run /init as init process Jan 20 00:38:16.023201 kernel: with arguments: Jan 20 00:38:16.023208 kernel: /init Jan 20 00:38:16.023214 kernel: with environment: Jan 20 00:38:16.023221 kernel: HOME=/ Jan 20 00:38:16.023228 kernel: TERM=linux Jan 20 00:38:16.023237 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:38:16.023248 systemd[1]: Detected virtualization kvm. Jan 20 00:38:16.023256 systemd[1]: Detected architecture x86-64. Jan 20 00:38:16.023263 systemd[1]: Running in initrd. Jan 20 00:38:16.023269 systemd[1]: No hostname configured, using default hostname. Jan 20 00:38:16.023276 systemd[1]: Hostname set to <localhost>. Jan 20 00:38:16.023283 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:38:16.023291 systemd[1]: Queued start job for default target initrd.target. Jan 20 00:38:16.023298 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:38:16.023307 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:38:16.023316 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 00:38:16.023323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:38:16.023331 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 00:38:16.023341 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 00:38:16.023352 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 00:38:16.023359 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 00:38:16.023366 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:38:16.023373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:38:16.023380 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:38:16.023387 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:38:16.023397 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:38:16.023406 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:38:16.023413 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:38:16.023421 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:38:16.023428 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 00:38:16.023435 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:38:16.023442 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:38:16.023449 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:38:16.023457 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:38:16.023466 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:38:16.023474 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 00:38:16.023481 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:38:16.023488 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 00:38:16.023495 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 00:38:16.023502 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:38:16.023509 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:38:16.023517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:38:16.023524 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 00:38:16.023556 systemd-journald[194]: Collecting audit messages is disabled. Jan 20 00:38:16.023572 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:38:16.023580 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 00:38:16.023591 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:38:16.023599 systemd-journald[194]: Journal started Jan 20 00:38:16.023615 systemd-journald[194]: Runtime Journal (/run/log/journal/f639c0d4cef54d8da729537ab205c0c3) is 6.0M, max 48.3M, 42.2M free. Jan 20 00:38:16.027014 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:38:16.027781 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:38:16.029961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:38:16.031446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:38:16.055336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:38:16.059786 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:38:16.067333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:38:16.079497 systemd-modules-load[195]: Inserted module 'overlay' Jan 20 00:38:16.086966 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:38:16.104733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:38:16.120809 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 00:38:16.124973 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 20 00:38:16.127859 kernel: Bridge firewalling registered Jan 20 00:38:16.133966 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 00:38:16.135631 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:38:16.141607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 20 00:38:16.149503 dracut-cmdline[222]: dracut-dracut-053 Jan 20 00:38:16.153812 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:38:16.166637 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:38:16.182006 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:38:16.216085 systemd-resolved[258]: Positive Trust Anchors: Jan 20 00:38:16.216116 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:38:16.216164 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:38:16.218525 systemd-resolved[258]: Defaulting to hostname 'linux'. Jan 20 00:38:16.219689 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:38:16.224912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:38:16.258819 kernel: SCSI subsystem initialized Jan 20 00:38:16.267845 kernel: Loading iSCSI transport class v2.0-870. Jan 20 00:38:16.278835 kernel: iscsi: registered transport (tcp) Jan 20 00:38:16.299472 kernel: iscsi: registered transport (qla4xxx) Jan 20 00:38:16.299526 kernel: QLogic iSCSI HBA Driver Jan 20 00:38:16.346957 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 00:38:16.361967 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 00:38:16.391019 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 00:38:16.391072 kernel: device-mapper: uevent: version 1.0.3 Jan 20 00:38:16.393637 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 20 00:38:16.437830 kernel: raid6: avx2x4 gen() 32365 MB/s Jan 20 00:38:16.455823 kernel: raid6: avx2x2 gen() 25144 MB/s Jan 20 00:38:16.475233 kernel: raid6: avx2x1 gen() 25957 MB/s Jan 20 00:38:16.475277 kernel: raid6: using algorithm avx2x4 gen() 32365 MB/s Jan 20 00:38:16.494716 kernel: raid6: .... xor() 5037 MB/s, rmw enabled Jan 20 00:38:16.494819 kernel: raid6: using avx2x2 recovery algorithm Jan 20 00:38:16.514811 kernel: xor: automatically using best checksumming function avx Jan 20 00:38:16.656840 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 00:38:16.669904 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:38:16.686223 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:38:16.700979 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 20 00:38:16.706623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 20 00:38:16.720919 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 00:38:16.734032 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Jan 20 00:38:16.765903 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:38:16.781965 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:38:16.857264 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:38:16.867972 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 00:38:16.880155 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 00:38:16.886424 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:38:16.893194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:38:16.899931 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:38:16.912914 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 00:38:16.925610 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 00:38:16.926091 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 00:38:16.920709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:38:16.946499 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 00:38:16.946894 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 00:38:16.946929 kernel: GPT:9289727 != 19775487 Jan 20 00:38:16.946976 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 00:38:16.946998 kernel: GPT:9289727 != 19775487 Jan 20 00:38:16.947018 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 00:38:16.947039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:38:16.920858 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:38:16.927263 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:38:16.946455 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:38:16.947104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:38:16.954852 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:38:16.973829 kernel: libata version 3.00 loaded. Jan 20 00:38:16.983525 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 00:38:16.983835 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 00:38:16.987578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:38:17.009986 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 20 00:38:17.010407 kernel: AVX2 version of gcm_enc/dec engaged. Jan 20 00:38:17.010428 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 00:38:17.011575 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 20 00:38:17.034474 kernel: AES CTR mode by8 optimization enabled Jan 20 00:38:17.034503 kernel: scsi host0: ahci Jan 20 00:38:17.034819 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (461) Jan 20 00:38:17.034840 kernel: scsi host1: ahci Jan 20 00:38:17.035064 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Jan 20 00:38:17.035089 kernel: scsi host2: ahci Jan 20 00:38:17.037517 kernel: scsi host3: ahci Jan 20 00:38:17.037721 kernel: scsi host4: ahci Jan 20 00:38:17.042892 kernel: scsi host5: ahci Jan 20 00:38:17.043075 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 20 00:38:17.043087 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 20 00:38:17.045821 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 20 00:38:17.045849 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 20 00:38:17.050826 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 20 00:38:17.050850 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 20 00:38:17.062276 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 00:38:17.067332 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 00:38:17.076978 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 00:38:17.091425 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 00:38:17.096484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:38:17.112012 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 00:38:17.113527 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:38:17.113591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:38:17.118827 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:38:17.125196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:38:17.142864 disk-uuid[558]: Primary Header is updated. Jan 20 00:38:17.142864 disk-uuid[558]: Secondary Entries is updated. Jan 20 00:38:17.142864 disk-uuid[558]: Secondary Header is updated. Jan 20 00:38:17.156294 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:38:17.156322 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:38:17.149881 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:38:17.163002 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:38:17.184879 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 20 00:38:17.361808 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 00:38:17.364799 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 00:38:17.364828 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 00:38:17.366788 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 00:38:17.372804 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 00:38:17.372828 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 00:38:17.374833 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 00:38:17.378417 kernel: ata3.00: applying bridge limits Jan 20 00:38:17.380661 kernel: ata3.00: configured for UDMA/100 Jan 20 00:38:17.383815 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 00:38:17.436855 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 00:38:17.437083 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 00:38:17.449816 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 00:38:18.156829 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:38:18.157326 disk-uuid[560]: The operation has completed successfully. Jan 20 00:38:18.189320 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 00:38:18.189486 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 00:38:18.223014 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 00:38:18.229309 sh[598]: Success Jan 20 00:38:18.241802 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 20 00:38:18.280785 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 00:38:18.298236 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 00:38:18.302928 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 00:38:18.318917 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c Jan 20 00:38:18.318952 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:38:18.318966 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 00:38:18.321562 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 00:38:18.323472 kernel: BTRFS info (device dm-0): using free space tree Jan 20 00:38:18.334287 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 00:38:18.339567 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 00:38:18.355086 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 00:38:18.361625 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 00:38:18.377637 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:38:18.377695 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:38:18.377709 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:38:18.382836 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:38:18.393055 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 00:38:18.398801 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:38:18.408176 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 20 00:38:18.422957 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 00:38:18.479338 ignition[702]: Ignition 2.19.0 Jan 20 00:38:18.479367 ignition[702]: Stage: fetch-offline Jan 20 00:38:18.479409 ignition[702]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:38:18.479420 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:38:18.479515 ignition[702]: parsed url from cmdline: "" Jan 20 00:38:18.479519 ignition[702]: no config URL provided Jan 20 00:38:18.479525 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 00:38:18.479534 ignition[702]: no config at "/usr/lib/ignition/user.ign" Jan 20 00:38:18.479559 ignition[702]: op(1): [started] loading QEMU firmware config module Jan 20 00:38:18.479565 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 00:38:18.488587 ignition[702]: op(1): [finished] loading QEMU firmware config module Jan 20 00:38:18.520030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:38:18.534106 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:38:18.561977 systemd-networkd[786]: lo: Link UP Jan 20 00:38:18.562010 systemd-networkd[786]: lo: Gained carrier Jan 20 00:38:18.564620 systemd-networkd[786]: Enumeration completed Jan 20 00:38:18.565030 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:38:18.565866 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:38:18.565872 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:38:18.567212 systemd-networkd[786]: eth0: Link UP Jan 20 00:38:18.567219 systemd-networkd[786]: eth0: Gained carrier Jan 20 00:38:18.567230 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:38:18.569426 systemd[1]: Reached target network.target - Network. Jan 20 00:38:18.594816 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:38:18.686205 ignition[702]: parsing config with SHA512: aef70196b1f8c728e1d8127c4e4c548d42f28ceb2ae3494432271fb52f9a1c506f1f4ddda6abcde3512ded24de6b7b7405456f24172980c2e78a55768c2684bb Jan 20 00:38:18.691510 unknown[702]: fetched base config from "system" Jan 20 00:38:18.691529 unknown[702]: fetched user config from "qemu" Jan 20 00:38:18.694437 ignition[702]: fetch-offline: fetch-offline passed Jan 20 00:38:18.696950 ignition[702]: Ignition finished successfully Jan 20 00:38:18.702815 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:38:18.706468 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 00:38:18.719037 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 00:38:18.734046 ignition[790]: Ignition 2.19.0 Jan 20 00:38:18.734072 ignition[790]: Stage: kargs Jan 20 00:38:18.734335 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:38:18.738318 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 20 00:38:18.734353 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:38:18.735611 ignition[790]: kargs: kargs passed Jan 20 00:38:18.735674 ignition[790]: Ignition finished successfully Jan 20 00:38:18.752018 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 00:38:18.770555 ignition[798]: Ignition 2.19.0 Jan 20 00:38:18.770588 ignition[798]: Stage: disks Jan 20 00:38:18.770899 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:38:18.773948 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 00:38:18.770918 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:38:18.777418 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 00:38:18.772315 ignition[798]: disks: disks passed Jan 20 00:38:18.782206 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 00:38:18.772374 ignition[798]: Ignition finished successfully Jan 20 00:38:18.788229 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:38:18.791037 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:38:18.793692 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:38:18.810961 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 00:38:18.830564 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 20 00:38:18.833211 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 00:38:18.857943 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 00:38:18.953824 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none. Jan 20 00:38:18.954305 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 00:38:18.957595 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 00:38:18.976867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:38:18.992621 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Jan 20 00:38:18.992654 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:38:18.992672 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:38:18.992690 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:38:18.980844 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 00:38:19.004419 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:38:18.995420 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 00:38:18.995465 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 00:38:18.995500 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:38:19.005650 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 00:38:19.014121 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 00:38:19.038991 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 00:38:19.075683 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 00:38:19.080161 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 20 00:38:19.086661 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 00:38:19.091095 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 00:38:19.189731 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 00:38:19.203884 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 00:38:19.207019 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 00:38:19.219820 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:38:19.239201 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 00:38:19.244666 ignition[927]: INFO : Ignition 2.19.0 Jan 20 00:38:19.244666 ignition[927]: INFO : Stage: mount Jan 20 00:38:19.248843 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:38:19.248843 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:38:19.248843 ignition[927]: INFO : mount: mount passed Jan 20 00:38:19.248843 ignition[927]: INFO : Ignition finished successfully Jan 20 00:38:19.258659 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 00:38:19.278876 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 00:38:19.314870 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 00:38:19.329947 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:38:19.339833 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Jan 20 00:38:19.339871 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:38:19.344735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:38:19.344894 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:38:19.350816 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:38:19.352646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 00:38:19.372828 ignition[959]: INFO : Ignition 2.19.0 Jan 20 00:38:19.372828 ignition[959]: INFO : Stage: files Jan 20 00:38:19.377047 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:38:19.377047 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:38:19.377047 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 20 00:38:19.377047 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 00:38:19.377047 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 00:38:19.392184 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 00:38:19.392184 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 00:38:19.392184 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 00:38:19.392184 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 20 00:38:19.392184 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 20 00:38:19.392184 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 00:38:19.392184 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 00:38:19.379080 unknown[959]: wrote ssh authorized keys file for user: core Jan 20 00:38:19.432614 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 20 00:38:19.566187 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 00:38:19.566187 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 00:38:19.575167 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 20 00:38:19.627645 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 20 00:38:19.725084 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 20 00:38:19.725084 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:38:19.736071 ignition[959]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 00:38:19.736071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 00:38:19.945256 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 20 00:38:19.976939 systemd-networkd[786]: eth0: Gained IPv6LL Jan 20 00:38:20.693563 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 00:38:20.693563 ignition[959]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 20 00:38:20.703948 ignition[959]: INFO : files: op(13): 
[started] setting preset to disabled for "coreos-metadata.service" Jan 20 00:38:20.780423 ignition[959]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:38:20.787548 ignition[959]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:38:20.795707 ignition[959]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 00:38:20.795707 ignition[959]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 20 00:38:20.795707 ignition[959]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 00:38:20.795707 ignition[959]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:38:20.795707 ignition[959]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:38:20.795707 ignition[959]: INFO : files: files passed Jan 20 00:38:20.795707 ignition[959]: INFO : Ignition finished successfully Jan 20 00:38:20.789959 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 00:38:20.817010 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 00:38:20.822250 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 00:38:20.826840 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 00:38:20.853336 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 00:38:20.827018 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 00:38:20.859910 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:38:20.859910 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:38:20.840457 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:38:20.870893 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:38:20.847543 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 00:38:20.873990 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 00:38:20.897472 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 00:38:20.897693 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 00:38:20.904318 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 00:38:20.910349 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 00:38:20.913421 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 00:38:20.914676 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 00:38:20.933587 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:38:20.949938 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 00:38:20.960718 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 20 00:38:20.964557 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:38:20.971321 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 00:38:20.976225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 00:38:20.976350 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:38:20.981814 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 00:38:20.986125 systemd[1]: Stopped target basic.target - Basic System. Jan 20 00:38:20.991245 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 00:38:20.996195 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:38:21.001215 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 00:38:21.006529 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 00:38:21.011862 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:38:21.017391 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 00:38:21.022375 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 00:38:21.027836 systemd[1]: Stopped target swap.target - Swaps. Jan 20 00:38:21.032394 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 00:38:21.032519 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:38:21.038092 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:38:21.042091 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:38:21.047128 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 00:38:21.047405 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:38:21.052658 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 00:38:21.052833 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 00:38:21.058619 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 00:38:21.058779 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:38:21.063873 systemd[1]: Stopped target paths.target - Path Units. Jan 20 00:38:21.068209 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 00:38:21.071845 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:38:21.077011 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 00:38:21.081988 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 00:38:21.086967 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 00:38:21.087069 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:38:21.092039 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 00:38:21.092175 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:38:21.096583 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 00:38:21.096701 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:38:21.132029 ignition[1013]: INFO : Ignition 2.19.0 Jan 20 00:38:21.132029 ignition[1013]: INFO : Stage: umount Jan 20 00:38:21.102242 systemd[1]: ignition-files.service: Deactivated successfully. 
Jan 20 00:38:21.144331 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:38:21.144331 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:38:21.144331 ignition[1013]: INFO : umount: umount passed Jan 20 00:38:21.144331 ignition[1013]: INFO : Ignition finished successfully Jan 20 00:38:21.102358 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 00:38:21.117957 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 00:38:21.122034 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 00:38:21.126119 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 00:38:21.126526 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:38:21.137196 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 00:38:21.137324 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:38:21.144603 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 00:38:21.144738 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 00:38:21.150861 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 00:38:21.152996 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 00:38:21.153131 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 00:38:21.159718 systemd[1]: Stopped target network.target - Network. Jan 20 00:38:21.163888 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 00:38:21.163968 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 00:38:21.169464 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 00:38:21.169528 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 00:38:21.174495 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 00:38:21.174555 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 00:38:21.179503 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 00:38:21.179585 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 00:38:21.185107 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 00:38:21.190663 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 00:38:21.203636 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 00:38:21.203865 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 00:38:21.206327 systemd-networkd[786]: eth0: DHCPv6 lease lost Jan 20 00:38:21.209710 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 00:38:21.209927 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 00:38:21.215272 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 00:38:21.215340 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:38:21.235905 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 00:38:21.238647 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 00:38:21.238703 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:38:21.239672 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:38:21.239720 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 20 00:38:21.240491 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 00:38:21.240536 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 00:38:21.241478 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 00:38:21.241527 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:38:21.242412 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:38:21.243062 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 00:38:21.243196 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 00:38:21.247277 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 00:38:21.247355 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 00:38:21.259357 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 00:38:21.259490 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 00:38:21.264592 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 00:38:21.396790 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 20 00:38:21.264808 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:38:21.270380 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 00:38:21.270448 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 00:38:21.275316 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 00:38:21.275357 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:38:21.276588 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 00:38:21.276638 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:38:21.279617 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 00:38:21.279666 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 00:38:21.282684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:38:21.282733 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:38:21.297964 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 00:38:21.304982 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 00:38:21.305060 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:38:21.309575 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 00:38:21.309637 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:38:21.317236 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 00:38:21.317287 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:38:21.321861 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:38:21.321911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:38:21.330134 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 00:38:21.330294 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 00:38:21.338174 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jan 20 00:38:21.354990 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 00:38:21.362380 systemd[1]: Switching root. Jan 20 00:38:21.466244 systemd-journald[194]: Journal stopped Jan 20 00:38:22.685568 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 00:38:22.685639 kernel: SELinux: policy capability open_perms=1 Jan 20 00:38:22.685652 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 00:38:22.685667 kernel: SELinux: policy capability always_check_network=0 Jan 20 00:38:22.685687 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 00:38:22.685697 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 00:38:22.685707 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 00:38:22.685722 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 00:38:22.685732 kernel: audit: type=1403 audit(1768869501.627:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 00:38:22.685779 systemd[1]: Successfully loaded SELinux policy in 53.967ms. Jan 20 00:38:22.685800 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.810ms. Jan 20 00:38:22.685812 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:38:22.685823 systemd[1]: Detected virtualization kvm. Jan 20 00:38:22.685834 systemd[1]: Detected architecture x86-64. Jan 20 00:38:22.685846 systemd[1]: Detected first boot. Jan 20 00:38:22.685859 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:38:22.685870 zram_generator::config[1079]: No configuration found. Jan 20 00:38:22.685882 systemd[1]: Populated /etc with preset unit settings. Jan 20 00:38:22.685893 systemd[1]: Queued start job for default target multi-user.target. Jan 20 00:38:22.685904 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 00:38:22.685915 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 00:38:22.685926 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 00:38:22.685938 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 00:38:22.685952 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 00:38:22.685962 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 00:38:22.685973 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 00:38:22.685984 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 00:38:22.685995 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 00:38:22.686006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:38:22.686017 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:38:22.686027 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 00:38:22.686038 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 20 00:38:22.686051 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 00:38:22.686062 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:38:22.686072 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 00:38:22.686083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:38:22.686094 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 00:38:22.686104 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:38:22.686115 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:38:22.686125 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:38:22.686136 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:38:22.686181 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 00:38:22.686193 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 00:38:22.686206 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 00:38:22.686216 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 20 00:38:22.686227 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:38:22.686238 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:38:22.686249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:38:22.686260 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 00:38:22.686271 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 00:38:22.686284 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 00:38:22.686303 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 00:38:22.686331 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:38:22.686350 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 00:38:22.686373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 00:38:22.686390 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 00:38:22.686406 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 00:38:22.686423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:38:22.686444 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:38:22.686461 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 00:38:22.686478 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:38:22.686494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:38:22.686511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:38:22.686529 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 00:38:22.686549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:38:22.686566 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 20 00:38:22.686589 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 20 00:38:22.686609 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 20 00:38:22.686626 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:38:22.686643 kernel: loop: module loaded Jan 20 00:38:22.686661 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:38:22.686681 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 00:38:22.686722 systemd-journald[1168]: Collecting audit messages is disabled. Jan 20 00:38:22.686806 kernel: fuse: init (API version 7.39) Jan 20 00:38:22.686854 kernel: ACPI: bus type drm_connector registered Jan 20 00:38:22.686868 systemd-journald[1168]: Journal started Jan 20 00:38:22.686888 systemd-journald[1168]: Runtime Journal (/run/log/journal/f639c0d4cef54d8da729537ab205c0c3) is 6.0M, max 48.3M, 42.2M free. Jan 20 00:38:22.694248 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 00:38:22.715791 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:38:22.722803 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:38:22.727119 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:38:22.729933 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 00:38:22.732517 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 00:38:22.735275 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 00:38:22.737904 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 00:38:22.740584 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 00:38:22.743371 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 00:38:22.746113 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 00:38:22.749295 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:38:22.752573 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 00:38:22.752835 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 00:38:22.756012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:38:22.756253 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:38:22.759325 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:38:22.759535 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:38:22.762412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:38:22.762618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:38:22.765895 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 00:38:22.766097 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 00:38:22.769049 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:38:22.769333 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:38:22.772310 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 20 00:38:22.775407 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 00:38:22.778987 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 00:38:22.793141 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 00:38:22.804944 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 00:38:22.810944 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 00:38:22.813689 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 00:38:22.815465 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 00:38:22.824940 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 00:38:22.828041 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:38:22.831692 systemd-journald[1168]: Time spent on flushing to /var/log/journal/f639c0d4cef54d8da729537ab205c0c3 is 13.548ms for 975 entries. Jan 20 00:38:22.831692 systemd-journald[1168]: System Journal (/var/log/journal/f639c0d4cef54d8da729537ab205c0c3) is 8.0M, max 195.6M, 187.6M free. Jan 20 00:38:22.855802 systemd-journald[1168]: Received client request to flush runtime journal. Jan 20 00:38:22.831899 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 00:38:22.837028 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:38:22.838466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:38:22.842906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:38:22.850709 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 00:38:22.854129 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 00:38:22.860447 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 00:38:22.864529 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 00:38:22.872440 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 00:38:22.884344 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:38:22.892232 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Jan 20 00:38:22.892247 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Jan 20 00:38:22.898692 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:38:22.902100 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:38:22.910910 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 00:38:22.913653 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 00:38:22.927954 udevadm[1231]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 20 00:38:22.939257 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 20 00:38:22.949010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:38:22.965698 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 20 00:38:22.965730 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 20 00:38:22.971847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:38:23.231397 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 00:38:23.251934 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:38:23.291194 systemd-udevd[1241]: Using default interface naming scheme 'v255'. Jan 20 00:38:23.312803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:38:23.323325 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:38:23.338902 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 00:38:23.349820 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 20 00:38:23.365040 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1242) Jan 20 00:38:23.418441 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 00:38:23.426829 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 20 00:38:23.437683 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 20 00:38:23.438006 kernel: ACPI: button: Power Button [PWRF] Jan 20 00:38:23.438022 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 00:38:23.440971 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:38:23.445213 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 20 00:38:23.445584 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 00:38:23.459970 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 00:38:23.495806 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 00:38:23.517898 systemd-networkd[1248]: lo: Link UP Jan 20 00:38:23.517923 systemd-networkd[1248]: lo: Gained carrier Jan 20 00:38:23.519703 systemd-networkd[1248]: Enumeration completed Jan 20 00:38:23.520472 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:38:23.520476 systemd-networkd[1248]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:38:23.521453 systemd-networkd[1248]: eth0: Link UP Jan 20 00:38:23.521459 systemd-networkd[1248]: eth0: Gained carrier Jan 20 00:38:23.521472 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:38:23.526617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:38:23.531301 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:38:23.539499 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 00:38:23.546059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:38:23.546734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:38:23.564427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 20 00:38:23.598863 systemd-networkd[1248]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:38:23.613731 kernel: kvm_amd: TSC scaling supported Jan 20 00:38:23.613815 kernel: kvm_amd: Nested Virtualization enabled Jan 20 00:38:23.613829 kernel: kvm_amd: Nested Paging enabled Jan 20 00:38:23.613840 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 00:38:23.616838 kernel: kvm_amd: PMU virtualization is disabled Jan 20 00:38:23.672817 kernel: EDAC MC: Ver: 3.0.0 Jan 20 00:38:23.686026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:38:23.705602 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 00:38:23.723000 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 00:38:23.736460 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:38:23.770479 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 00:38:23.774529 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:38:23.792955 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 00:38:23.803309 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:38:23.841517 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 00:38:23.847416 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 00:38:23.851987 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 00:38:23.852062 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:38:23.855961 systemd[1]: Reached target machines.target - Containers. Jan 20 00:38:23.860477 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 00:38:23.877086 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 00:38:23.883703 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 00:38:23.887527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:38:23.888724 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 00:38:23.894893 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 20 00:38:23.903920 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 00:38:23.909108 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 00:38:23.913021 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 00:38:23.925841 kernel: loop0: detected capacity change from 0 to 140768 Jan 20 00:38:23.930140 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 00:38:23.930998 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 20 00:38:23.952831 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 00:38:23.982992 kernel: loop1: detected capacity change from 0 to 142488 Jan 20 00:38:24.026845 kernel: loop2: detected capacity change from 0 to 224512 Jan 20 00:38:24.073793 kernel: loop3: detected capacity change from 0 to 140768 Jan 20 00:38:24.090829 kernel: loop4: detected capacity change from 0 to 142488 Jan 20 00:38:24.105860 kernel: loop5: detected capacity change from 0 to 224512 Jan 20 00:38:24.116101 (sd-merge)[1316]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 00:38:24.116887 (sd-merge)[1316]: Merged extensions into '/usr'. Jan 20 00:38:24.120952 systemd[1]: Reloading requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 00:38:24.120982 systemd[1]: Reloading... Jan 20 00:38:24.160818 zram_generator::config[1344]: No configuration found. Jan 20 00:38:24.182939 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 00:38:24.297717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:38:24.354312 systemd[1]: Reloading finished in 232 ms. Jan 20 00:38:24.373374 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 00:38:24.376695 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 00:38:24.401902 systemd[1]: Starting ensure-sysext.service... Jan 20 00:38:24.405205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:38:24.411048 systemd[1]: Reloading requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)... Jan 20 00:38:24.411075 systemd[1]: Reloading... Jan 20 00:38:24.427236 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 00:38:24.427581 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 00:38:24.428564 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 00:38:24.428913 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Jan 20 00:38:24.429007 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Jan 20 00:38:24.432627 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:38:24.432653 systemd-tmpfiles[1389]: Skipping /boot Jan 20 00:38:24.446578 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:38:24.446592 systemd-tmpfiles[1389]: Skipping /boot Jan 20 00:38:24.469880 zram_generator::config[1418]: No configuration found. Jan 20 00:38:24.580407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:38:24.640263 systemd[1]: Reloading finished in 228 ms. Jan 20 00:38:24.664067 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:38:24.686000 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:38:24.691338 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 20 00:38:24.695830 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 00:38:24.704723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:38:24.712092 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 00:38:24.722579 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:38:24.724398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:38:24.726085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:38:24.743997 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:38:24.752811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:38:24.756692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:38:24.757490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:38:24.759997 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 00:38:24.766335 augenrules[1490]: No rules Jan 20 00:38:24.767338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:38:24.767645 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:38:24.773596 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:38:24.778027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:38:24.778347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:38:24.784443 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:38:24.784869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:38:24.797537 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:38:24.799733 systemd-resolved[1473]: Positive Trust Anchors: Jan 20 00:38:24.799799 systemd-resolved[1473]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:38:24.799826 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:38:24.804243 systemd-resolved[1473]: Defaulting to hostname 'linux'. Jan 20 00:38:24.805543 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 00:38:24.809929 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:38:24.815408 systemd[1]: Reached target network.target - Network. Jan 20 00:38:24.817708 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 20 00:38:24.820739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:38:24.821008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:38:24.831058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:38:24.834993 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:38:24.838701 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:38:24.841536 systemd-networkd[1248]: eth0: Gained IPv6LL Jan 20 00:38:24.844998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:38:24.847888 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:38:24.849459 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 00:38:24.852139 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:38:24.852269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:38:24.853575 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 00:38:24.857522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:38:24.857737 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:38:24.861115 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:38:24.861354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:38:24.864816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:38:24.865030 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:38:24.868913 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:38:24.869149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:38:24.872478 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 00:38:24.878239 systemd[1]: Finished ensure-sysext.service. Jan 20 00:38:24.884937 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:38:24.888061 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:38:24.888133 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:38:24.900976 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 00:38:24.969134 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 00:38:26.321559 systemd-resolved[1473]: Clock change detected. Flushing caches. Jan 20 00:38:26.321612 systemd-timesyncd[1527]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 00:38:26.321671 systemd-timesyncd[1527]: Initial clock synchronization to Tue 2026-01-20 00:38:26.321468 UTC. Jan 20 00:38:26.324304 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 20 00:38:26.327363 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:38:26.330704 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:38:26.334029 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 00:38:26.337873 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:38:26.337940 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:38:26.340644 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:38:26.343755 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:38:26.346840 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:38:26.350108 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:38:26.353310 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:38:26.358048 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 00:38:26.362102 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 00:38:26.367139 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:38:26.370034 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:38:26.372514 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:38:26.375157 systemd[1]: System is tainted: cgroupsv1 Jan 20 00:38:26.375213 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:38:26.375235 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:38:26.376612 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:38:26.380552 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:38:26.384225 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:38:26.387673 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:38:26.391634 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 00:38:26.394363 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:38:26.395882 jq[1536]: false Jan 20 00:38:26.396511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:38:26.401852 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 00:38:26.409158 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 20 00:38:26.409694 extend-filesystems[1537]: Found loop3 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found loop4 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found loop5 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found sr0 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found vda Jan 20 00:38:26.414107 extend-filesystems[1537]: Found vda1 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found vda2 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found vda3 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found usr Jan 20 00:38:26.414107 extend-filesystems[1537]: Found vda4 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found vda6 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found vda7 Jan 20 00:38:26.414107 extend-filesystems[1537]: Found vda9 Jan 20 00:38:26.414107 extend-filesystems[1537]: Checking size of /dev/vda9 Jan 20 00:38:26.496435 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:38:26.496469 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1243) Jan 20 00:38:26.413108 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 00:38:26.442940 dbus-daemon[1534]: [system] SELinux support is enabled Jan 20 00:38:26.496804 extend-filesystems[1537]: Resized partition /dev/vda9 Jan 20 00:38:26.427134 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 00:38:26.504355 extend-filesystems[1559]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:38:26.518033 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:38:26.432087 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 00:38:26.445440 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:38:26.458767 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 00:38:26.463172 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:38:26.486311 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 00:38:26.537743 update_engine[1565]: I20260120 00:38:26.514101 1565 main.cc:92] Flatcar Update Engine starting Jan 20 00:38:26.537743 update_engine[1565]: I20260120 00:38:26.517652 1565 update_check_scheduler.cc:74] Next update check in 7m32s Jan 20 00:38:26.507851 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:38:26.538479 jq[1569]: true Jan 20 00:38:26.526503 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 00:38:26.526803 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:38:26.527695 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:38:26.539362 extend-filesystems[1559]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:38:26.539362 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:38:26.539362 extend-filesystems[1559]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 00:38:26.528119 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:38:26.551912 extend-filesystems[1537]: Resized filesystem in /dev/vda9 Jan 20 00:38:26.532796 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 20 00:38:26.536215 systemd-logind[1562]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:38:26.536248 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:38:26.540110 systemd-logind[1562]: New seat seat0. Jan 20 00:38:26.551844 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:38:26.557285 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:38:26.557605 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:38:26.561608 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:38:26.561876 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:38:26.579154 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:38:26.581408 jq[1582]: true Jan 20 00:38:26.585922 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:38:26.586500 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:38:26.598501 dbus-daemon[1534]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 00:38:26.603609 tar[1581]: linux-amd64/LICENSE Jan 20 00:38:26.605508 tar[1581]: linux-amd64/helm Jan 20 00:38:26.609816 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:38:26.613714 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:38:26.616097 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:38:26.616237 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:38:26.619584 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:38:26.619706 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:38:26.623722 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:38:26.631211 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 00:38:26.669169 bash[1617]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:38:26.672855 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:38:26.673859 locksmithd[1615]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:38:26.679915 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:38:26.696703 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:38:26.730046 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:38:26.743262 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 00:38:26.758820 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:38:26.759442 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:38:26.772427 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 20 00:38:26.785727 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:38:26.794807 containerd[1583]: time="2026-01-20T00:38:26.794569932Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:38:26.800561 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:38:26.811603 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:38:26.817302 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:38:26.821151 containerd[1583]: time="2026-01-20T00:38:26.821119187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:38:26.823333 containerd[1583]: time="2026-01-20T00:38:26.823303055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:38:26.823491 containerd[1583]: time="2026-01-20T00:38:26.823475507Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:38:26.823544 containerd[1583]: time="2026-01-20T00:38:26.823531802Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 00:38:26.823751 containerd[1583]: time="2026-01-20T00:38:26.823736114Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 00:38:26.823802 containerd[1583]: time="2026-01-20T00:38:26.823790635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:38:26.823905 containerd[1583]: time="2026-01-20T00:38:26.823889761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:38:26.824024 containerd[1583]: time="2026-01-20T00:38:26.824009614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:38:26.824337 containerd[1583]: time="2026-01-20T00:38:26.824319423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:38:26.824436 containerd[1583]: time="2026-01-20T00:38:26.824421483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 00:38:26.824493 containerd[1583]: time="2026-01-20T00:38:26.824480042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:38:26.824532 containerd[1583]: time="2026-01-20T00:38:26.824521800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:38:26.824652 containerd[1583]: time="2026-01-20T00:38:26.824637967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:38:26.825002 containerd[1583]: time="2026-01-20T00:38:26.824918391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 20 00:38:26.825235 containerd[1583]: time="2026-01-20T00:38:26.825218431Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:38:26.825283 containerd[1583]: time="2026-01-20T00:38:26.825272001Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:38:26.825453 containerd[1583]: time="2026-01-20T00:38:26.825437400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:38:26.825552 containerd[1583]: time="2026-01-20T00:38:26.825539410Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:38:26.835490 containerd[1583]: time="2026-01-20T00:38:26.835465779Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:38:26.835695 containerd[1583]: time="2026-01-20T00:38:26.835675851Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:38:26.835840 containerd[1583]: time="2026-01-20T00:38:26.835821544Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:38:26.835940 containerd[1583]: time="2026-01-20T00:38:26.835918645Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:38:26.836145 containerd[1583]: time="2026-01-20T00:38:26.836126232Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:38:26.836320 containerd[1583]: time="2026-01-20T00:38:26.836304886Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:38:26.836842 containerd[1583]: time="2026-01-20T00:38:26.836826389Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:38:26.837098 containerd[1583]: time="2026-01-20T00:38:26.837076777Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:38:26.837185 containerd[1583]: time="2026-01-20T00:38:26.837171344Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 00:38:26.837312 containerd[1583]: time="2026-01-20T00:38:26.837235994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 00:38:26.837371 containerd[1583]: time="2026-01-20T00:38:26.837356780Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:38:26.837456 containerd[1583]: time="2026-01-20T00:38:26.837443602Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 00:38:26.837497 containerd[1583]: time="2026-01-20T00:38:26.837487233Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 00:38:26.837556 containerd[1583]: time="2026-01-20T00:38:26.837544240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 20 00:38:26.837642 containerd[1583]: time="2026-01-20T00:38:26.837628086Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:38:26.837760 containerd[1583]: time="2026-01-20T00:38:26.837697055Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:38:26.837824 containerd[1583]: time="2026-01-20T00:38:26.837809896Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:38:26.837873 containerd[1583]: time="2026-01-20T00:38:26.837862344Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:38:26.837922 containerd[1583]: time="2026-01-20T00:38:26.837911846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838019 containerd[1583]: time="2026-01-20T00:38:26.838004158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838068 containerd[1583]: time="2026-01-20T00:38:26.838057608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838121 containerd[1583]: time="2026-01-20T00:38:26.838109695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838168 containerd[1583]: time="2026-01-20T00:38:26.838158446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838256 containerd[1583]: time="2026-01-20T00:38:26.838244847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838298 containerd[1583]: time="2026-01-20T00:38:26.838287898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838343 containerd[1583]: time="2026-01-20T00:38:26.838333714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838429 containerd[1583]: time="2026-01-20T00:38:26.838414845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838479 containerd[1583]: time="2026-01-20T00:38:26.838469587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838524 containerd[1583]: time="2026-01-20T00:38:26.838514541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838568 containerd[1583]: time="2026-01-20T00:38:26.838558383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838694 containerd[1583]: time="2026-01-20T00:38:26.838680681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838782 containerd[1583]: time="2026-01-20T00:38:26.838769106Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 00:38:26.838845 containerd[1583]: time="2026-01-20T00:38:26.838833517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 20 00:38:26.838893 containerd[1583]: time="2026-01-20T00:38:26.838882829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.838932 containerd[1583]: time="2026-01-20T00:38:26.838922413Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:38:26.839058 containerd[1583]: time="2026-01-20T00:38:26.839044120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:38:26.839130 containerd[1583]: time="2026-01-20T00:38:26.839116034Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:38:26.839232 containerd[1583]: time="2026-01-20T00:38:26.839160528Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:38:26.839285 containerd[1583]: time="2026-01-20T00:38:26.839271925Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:38:26.839360 containerd[1583]: time="2026-01-20T00:38:26.839315306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 00:38:26.839507 containerd[1583]: time="2026-01-20T00:38:26.839444317Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 00:38:26.839555 containerd[1583]: time="2026-01-20T00:38:26.839544845Z" level=info msg="NRI interface is disabled by configuration." Jan 20 00:38:26.839649 containerd[1583]: time="2026-01-20T00:38:26.839631186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 20 00:38:26.840118 containerd[1583]: time="2026-01-20T00:38:26.840062411Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:38:26.840367 containerd[1583]: time="2026-01-20T00:38:26.840351992Z" level=info msg="Connect containerd service" Jan 20 00:38:26.840575 containerd[1583]: time="2026-01-20T00:38:26.840558567Z" level=info msg="using legacy CRI server" Jan 20 00:38:26.840620 containerd[1583]: time="2026-01-20T00:38:26.840610404Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:38:26.840819 containerd[1583]: time="2026-01-20T00:38:26.840803044Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:38:26.841492 containerd[1583]: time="2026-01-20T00:38:26.841472373Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 
00:38:26.841756 containerd[1583]: time="2026-01-20T00:38:26.841704103Z" level=info msg="Start subscribing containerd event" Jan 20 00:38:26.841906 containerd[1583]: time="2026-01-20T00:38:26.841842662Z" level=info msg="Start recovering state" Jan 20 00:38:26.842241 containerd[1583]: time="2026-01-20T00:38:26.842037085Z" level=info msg="Start event monitor" Jan 20 00:38:26.842241 containerd[1583]: time="2026-01-20T00:38:26.842048729Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:38:26.842241 containerd[1583]: time="2026-01-20T00:38:26.842063164Z" level=info msg="Start snapshots syncer" Jan 20 00:38:26.842241 containerd[1583]: time="2026-01-20T00:38:26.842074325Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:38:26.842241 containerd[1583]: time="2026-01-20T00:38:26.842081548Z" level=info msg="Start streaming server" Jan 20 00:38:26.842505 containerd[1583]: time="2026-01-20T00:38:26.842283778Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:38:26.843328 containerd[1583]: time="2026-01-20T00:38:26.843310595Z" level=info msg="containerd successfully booted in 0.053203s" Jan 20 00:38:26.848214 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:38:27.064337 tar[1581]: linux-amd64/README.md Jan 20 00:38:27.077346 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 00:38:27.373869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:38:27.377339 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:38:27.379147 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:38:27.381045 systemd[1]: Startup finished in 6.966s (kernel) + 4.455s (userspace) = 11.422s. Jan 20 00:38:27.833559 kubelet[1666]: E0120 00:38:27.833346 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:38:27.837434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:38:27.837711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:38:30.144479 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:38:30.160524 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:57624.service - OpenSSH per-connection server daemon (10.0.0.1:57624). Jan 20 00:38:30.210795 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 57624 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:38:30.211215 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:38:30.220102 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:38:30.232264 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:38:30.234176 systemd-logind[1562]: New session 1 of user core. Jan 20 00:38:30.246912 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:38:30.253334 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 20 00:38:30.259526 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:38:30.378164 systemd[1685]: Queued start job for default target default.target. Jan 20 00:38:30.378649 systemd[1685]: Created slice app.slice - User Application Slice. Jan 20 00:38:30.378690 systemd[1685]: Reached target paths.target - Paths. Jan 20 00:38:30.378704 systemd[1685]: Reached target timers.target - Timers. Jan 20 00:38:30.388115 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:38:30.395151 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:38:30.395242 systemd[1685]: Reached target sockets.target - Sockets. Jan 20 00:38:30.395256 systemd[1685]: Reached target basic.target - Basic System. Jan 20 00:38:30.395303 systemd[1685]: Reached target default.target - Main User Target. Jan 20 00:38:30.395339 systemd[1685]: Startup finished in 127ms. Jan 20 00:38:30.395883 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:38:30.397626 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:38:30.457198 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:57630.service - OpenSSH per-connection server daemon (10.0.0.1:57630). Jan 20 00:38:30.494902 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 57630 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:38:30.496634 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:38:30.501780 systemd-logind[1562]: New session 2 of user core. Jan 20 00:38:30.511363 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:38:30.570374 sshd[1697]: pam_unix(sshd:session): session closed for user core Jan 20 00:38:30.585534 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:57634.service - OpenSSH per-connection server daemon (10.0.0.1:57634). Jan 20 00:38:30.586148 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:57630.service: Deactivated successfully. Jan 20 00:38:30.588455 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Jan 20 00:38:30.589139 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:38:30.590262 systemd-logind[1562]: Removed session 2. Jan 20 00:38:30.624249 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 57634 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:38:30.625716 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:38:30.630419 systemd-logind[1562]: New session 3 of user core. Jan 20 00:38:30.645293 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:38:30.698288 sshd[1702]: pam_unix(sshd:session): session closed for user core Jan 20 00:38:30.713364 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:57648.service - OpenSSH per-connection server daemon (10.0.0.1:57648). Jan 20 00:38:30.714228 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:57634.service: Deactivated successfully. Jan 20 00:38:30.717287 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:38:30.717936 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:38:30.719489 systemd-logind[1562]: Removed session 3. 
Jan 20 00:38:30.752037 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 57648 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:38:30.753533 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:38:30.759645 systemd-logind[1562]: New session 4 of user core. Jan 20 00:38:30.778376 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:38:30.838093 sshd[1710]: pam_unix(sshd:session): session closed for user core Jan 20 00:38:30.852308 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:57660.service - OpenSSH per-connection server daemon (10.0.0.1:57660). Jan 20 00:38:30.853087 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:57648.service: Deactivated successfully. Jan 20 00:38:30.856032 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:38:30.856722 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:38:30.858191 systemd-logind[1562]: Removed session 4. Jan 20 00:38:30.887614 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 57660 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:38:30.889552 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:38:30.895651 systemd-logind[1562]: New session 5 of user core. Jan 20 00:38:30.913529 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:38:30.979860 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:38:30.980485 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:38:31.002381 sudo[1725]: pam_unix(sudo:session): session closed for user root Jan 20 00:38:31.005629 sshd[1718]: pam_unix(sshd:session): session closed for user core Jan 20 00:38:31.012248 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:57674.service - OpenSSH per-connection server daemon (10.0.0.1:57674). Jan 20 00:38:31.012759 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:57660.service: Deactivated successfully. Jan 20 00:38:31.014765 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:38:31.016775 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:38:31.018008 systemd-logind[1562]: Removed session 5. Jan 20 00:38:31.052814 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 57674 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:38:31.055097 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:38:31.061683 systemd-logind[1562]: New session 6 of user core. Jan 20 00:38:31.071272 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:38:31.129203 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:38:31.129735 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:38:31.135598 sudo[1735]: pam_unix(sudo:session): session closed for user root Jan 20 00:38:31.143618 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 00:38:31.144034 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:38:31.170260 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 00:38:31.173582 auditctl[1738]: No rules Jan 20 00:38:31.174233 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 20 00:38:31.174619 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 00:38:31.177851 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:38:31.215211 augenrules[1757]: No rules Jan 20 00:38:31.216954 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:38:31.218690 sudo[1734]: pam_unix(sudo:session): session closed for user root Jan 20 00:38:31.221107 sshd[1727]: pam_unix(sshd:session): session closed for user core Jan 20 00:38:31.231379 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:57690.service - OpenSSH per-connection server daemon (10.0.0.1:57690). Jan 20 00:38:31.231952 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:57674.service: Deactivated successfully. Jan 20 00:38:31.233753 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:38:31.234719 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:38:31.236619 systemd-logind[1562]: Removed session 6. Jan 20 00:38:31.266301 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 57690 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:38:31.268207 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:38:31.274098 systemd-logind[1562]: New session 7 of user core. Jan 20 00:38:31.286431 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:38:31.343217 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:38:31.343617 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:38:31.654446 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 00:38:31.654695 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 00:38:31.963649 dockerd[1788]: time="2026-01-20T00:38:31.963443552Z" level=info msg="Starting up" Jan 20 00:38:32.249884 dockerd[1788]: time="2026-01-20T00:38:32.249749333Z" level=info msg="Loading containers: start." Jan 20 00:38:32.396015 kernel: Initializing XFRM netlink socket Jan 20 00:38:32.497083 systemd-networkd[1248]: docker0: Link UP Jan 20 00:38:32.523640 dockerd[1788]: time="2026-01-20T00:38:32.523472419Z" level=info msg="Loading containers: done." Jan 20 00:38:32.540913 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1213576599-merged.mount: Deactivated successfully. Jan 20 00:38:32.543247 dockerd[1788]: time="2026-01-20T00:38:32.543165162Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 00:38:32.543352 dockerd[1788]: time="2026-01-20T00:38:32.543293341Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 00:38:32.543478 dockerd[1788]: time="2026-01-20T00:38:32.543436178Z" level=info msg="Daemon has completed initialization" Jan 20 00:38:32.588206 dockerd[1788]: time="2026-01-20T00:38:32.588105580Z" level=info msg="API listen on /run/docker.sock" Jan 20 00:38:32.588389 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 20 00:38:33.278284 containerd[1583]: time="2026-01-20T00:38:33.277944948Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 00:38:33.987180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229878553.mount: Deactivated successfully. Jan 20 00:38:35.197711 containerd[1583]: time="2026-01-20T00:38:35.197630486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:35.198485 containerd[1583]: time="2026-01-20T00:38:35.198381262Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 20 00:38:35.199693 containerd[1583]: time="2026-01-20T00:38:35.199619290Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:35.202490 containerd[1583]: time="2026-01-20T00:38:35.202441518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:35.203583 containerd[1583]: time="2026-01-20T00:38:35.203541442Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.925520112s" Jan 20 00:38:35.203583 containerd[1583]: time="2026-01-20T00:38:35.203581076Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 00:38:35.204193 containerd[1583]: time="2026-01-20T00:38:35.204165432Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 00:38:36.329733 containerd[1583]: time="2026-01-20T00:38:36.329626360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:36.330692 containerd[1583]: time="2026-01-20T00:38:36.330614134Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 20 00:38:36.331802 containerd[1583]: time="2026-01-20T00:38:36.331749223Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:36.334684 containerd[1583]: time="2026-01-20T00:38:36.334625391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:36.335579 containerd[1583]: time="2026-01-20T00:38:36.335540039Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 
1.131341706s" Jan 20 00:38:36.335625 containerd[1583]: time="2026-01-20T00:38:36.335576877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 00:38:36.336244 containerd[1583]: time="2026-01-20T00:38:36.336080279Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 00:38:37.548067 containerd[1583]: time="2026-01-20T00:38:37.547935849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:37.548932 containerd[1583]: time="2026-01-20T00:38:37.548864145Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 20 00:38:37.550098 containerd[1583]: time="2026-01-20T00:38:37.550058682Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:37.554103 containerd[1583]: time="2026-01-20T00:38:37.554040737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:37.555557 containerd[1583]: time="2026-01-20T00:38:37.555474534Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.219361373s" Jan 20 00:38:37.555557 containerd[1583]: time="2026-01-20T00:38:37.555528895Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 00:38:37.556339 containerd[1583]: time="2026-01-20T00:38:37.556136530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 00:38:38.088176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 00:38:38.106261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:38:38.261528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:38:38.266677 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:38:38.310720 kubelet[2018]: E0120 00:38:38.310683 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:38:38.315895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:38:38.316166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:38:38.533756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407365418.mount: Deactivated successfully. 
Jan 20 00:38:38.919334 containerd[1583]: time="2026-01-20T00:38:38.919199968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:38.920200 containerd[1583]: time="2026-01-20T00:38:38.920162733Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 20 00:38:38.921317 containerd[1583]: time="2026-01-20T00:38:38.921288734Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:38.923614 containerd[1583]: time="2026-01-20T00:38:38.923564363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:38.924214 containerd[1583]: time="2026-01-20T00:38:38.924151052Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.367983804s" Jan 20 00:38:38.924214 containerd[1583]: time="2026-01-20T00:38:38.924204432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 00:38:38.924763 containerd[1583]: time="2026-01-20T00:38:38.924676363Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 00:38:39.356807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015353601.mount: Deactivated successfully. 
Jan 20 00:38:40.866164 containerd[1583]: time="2026-01-20T00:38:40.866081322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:40.867160 containerd[1583]: time="2026-01-20T00:38:40.867019172Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 20 00:38:40.868637 containerd[1583]: time="2026-01-20T00:38:40.868556825Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:40.871941 containerd[1583]: time="2026-01-20T00:38:40.871858041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:40.873029 containerd[1583]: time="2026-01-20T00:38:40.872947245Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.948243682s" Jan 20 00:38:40.873070 containerd[1583]: time="2026-01-20T00:38:40.873031712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 00:38:40.873726 containerd[1583]: time="2026-01-20T00:38:40.873689941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 00:38:41.284717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988286982.mount: Deactivated successfully. 
Jan 20 00:38:41.290551 containerd[1583]: time="2026-01-20T00:38:41.290497486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:41.291546 containerd[1583]: time="2026-01-20T00:38:41.291480618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 00:38:41.292651 containerd[1583]: time="2026-01-20T00:38:41.292593238Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:41.296841 containerd[1583]: time="2026-01-20T00:38:41.296771105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:41.297923 containerd[1583]: time="2026-01-20T00:38:41.297857376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 424.131077ms" Jan 20 00:38:41.297923 containerd[1583]: time="2026-01-20T00:38:41.297908241Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 00:38:41.298633 containerd[1583]: time="2026-01-20T00:38:41.298606111Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 00:38:41.737145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516229502.mount: Deactivated successfully. Jan 20 00:38:44.396034 containerd[1583]: time="2026-01-20T00:38:44.395839138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:44.397580 containerd[1583]: time="2026-01-20T00:38:44.397517967Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 20 00:38:44.399177 containerd[1583]: time="2026-01-20T00:38:44.399120023Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:44.403231 containerd[1583]: time="2026-01-20T00:38:44.403182593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:38:44.404616 containerd[1583]: time="2026-01-20T00:38:44.404561522Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.105860224s" Jan 20 00:38:44.404616 containerd[1583]: time="2026-01-20T00:38:44.404614962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 00:38:46.678283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:38:46.690311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:38:46.719722 systemd[1]: Reloading requested from client PID 2175 ('systemctl') (unit session-7.scope)... Jan 20 00:38:46.719767 systemd[1]: Reloading... Jan 20 00:38:46.809019 zram_generator::config[2217]: No configuration found. Jan 20 00:38:46.913854 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:38:46.981408 systemd[1]: Reloading finished in 260 ms. Jan 20 00:38:47.029384 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 00:38:47.029539 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 00:38:47.029860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:38:47.031665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:38:47.196378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:38:47.202103 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:38:47.256334 kubelet[2274]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:38:47.256334 kubelet[2274]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:38:47.256334 kubelet[2274]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:38:47.256334 kubelet[2274]: I0120 00:38:47.256287 2274 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:38:47.482668 kubelet[2274]: I0120 00:38:47.482626 2274 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 00:38:47.482668 kubelet[2274]: I0120 00:38:47.482663 2274 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:38:47.482873 kubelet[2274]: I0120 00:38:47.482848 2274 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 00:38:47.503251 kubelet[2274]: E0120 00:38:47.503197 2274 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:38:47.503621 kubelet[2274]: I0120 00:38:47.503588 2274 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:38:47.508955 kubelet[2274]: E0120 00:38:47.508841 2274 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:38:47.508955 kubelet[2274]: I0120 00:38:47.508876 2274 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:38:47.515603 kubelet[2274]: I0120 00:38:47.515536 2274 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:38:47.516061 kubelet[2274]: I0120 00:38:47.515951 2274 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:38:47.516235 kubelet[2274]: I0120 00:38:47.516030 2274 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 20 00:38:47.516690 kubelet[2274]: I0120 00:38:47.516625 2274 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:38:47.516690 kubelet[2274]: I0120 00:38:47.516653 2274 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 00:38:47.516831 kubelet[2274]: I0120 00:38:47.516771 2274 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:38:47.519230 kubelet[2274]: I0120 00:38:47.519159 2274 kubelet.go:446] "Attempting to sync node with API server" Jan 20 00:38:47.519230 kubelet[2274]: I0120 00:38:47.519197 2274 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:38:47.519230 kubelet[2274]: I0120 00:38:47.519213 2274 kubelet.go:352] "Adding apiserver pod source" Jan 20 00:38:47.519230 kubelet[2274]: I0120 00:38:47.519222 2274 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:38:47.523865 kubelet[2274]: I0120 00:38:47.522630 2274 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:38:47.523865 kubelet[2274]: I0120 00:38:47.523053 2274 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 00:38:47.523865 kubelet[2274]: W0120 00:38:47.523103 2274 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 20 00:38:47.523865 kubelet[2274]: W0120 00:38:47.523270 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 20 00:38:47.523865 kubelet[2274]: E0120 00:38:47.523335 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:38:47.524292 kubelet[2274]: W0120 00:38:47.524068 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 20 00:38:47.524292 kubelet[2274]: E0120 00:38:47.524116 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:38:47.525345 kubelet[2274]: I0120 00:38:47.524786 2274 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:38:47.525345 kubelet[2274]: I0120 00:38:47.524818 2274 server.go:1287] "Started kubelet" Jan 20 00:38:47.525345 kubelet[2274]: I0120 00:38:47.525253 2274 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:38:47.529562 kubelet[2274]: I0120 00:38:47.529522 2274 server.go:479] "Adding debug handlers to kubelet server" Jan 20 00:38:47.532355 kubelet[2274]: I0120 00:38:47.532282 2274 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:38:47.533628 kubelet[2274]: I0120 00:38:47.533547 2274 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:38:47.534569 kubelet[2274]: I0120 00:38:47.533763 2274 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:38:47.534569 kubelet[2274]: I0120 00:38:47.534374 2274 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:38:47.535297 kubelet[2274]: E0120 00:38:47.532850 2274 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4976a220fcb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:38:47.524801715 +0000 UTC m=+0.312980004,LastTimestamp:2026-01-20 00:38:47.524801715 +0000 UTC m=+0.312980004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:38:47.535605 kubelet[2274]: E0120 00:38:47.535553 2274 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:38:47.535647 kubelet[2274]: I0120 00:38:47.535609 2274 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:38:47.535881 kubelet[2274]: I0120 00:38:47.535845 2274 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:38:47.536024 kubelet[2274]: I0120 00:38:47.535936 2274 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:38:47.536456 kubelet[2274]: W0120 00:38:47.536360 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 20 00:38:47.536500 kubelet[2274]: E0120 00:38:47.536473 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:38:47.536785 kubelet[2274]: E0120 00:38:47.536699 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" Jan 20 00:38:47.538037 kubelet[2274]: I0120 00:38:47.538003 2274 factory.go:221] Registration of the systemd container factory successfully Jan 20 00:38:47.538106 kubelet[2274]: I0120 00:38:47.538091 2274 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:38:47.539525 kubelet[2274]: E0120 00:38:47.539173 2274 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:38:47.540091 kubelet[2274]: I0120 00:38:47.540060 2274 factory.go:221] Registration of the containerd container factory successfully Jan 20 00:38:47.558640 kubelet[2274]: I0120 00:38:47.558544 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 00:38:47.560551 kubelet[2274]: I0120 00:38:47.560513 2274 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 00:38:47.560551 kubelet[2274]: I0120 00:38:47.560548 2274 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 00:38:47.560850 kubelet[2274]: I0120 00:38:47.560565 2274 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 00:38:47.560850 kubelet[2274]: I0120 00:38:47.560572 2274 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 00:38:47.560850 kubelet[2274]: E0120 00:38:47.560615 2274 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:38:47.561714 kubelet[2274]: W0120 00:38:47.561617 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 20 00:38:47.561714 kubelet[2274]: E0120 00:38:47.561681 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:38:47.567037 kubelet[2274]: I0120 00:38:47.566920 2274 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:38:47.567037 kubelet[2274]: I0120 00:38:47.566952 2274 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:38:47.567037 kubelet[2274]: I0120 00:38:47.567018 2274 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:38:47.615758 kubelet[2274]: I0120 00:38:47.615651 2274 policy_none.go:49] "None policy: Start" Jan 20 00:38:47.615758 kubelet[2274]: I0120 00:38:47.615692 2274 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:38:47.615758 kubelet[2274]: I0120 00:38:47.615705 2274 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:38:47.622278 kubelet[2274]: I0120 00:38:47.622213 2274 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 00:38:47.622472 kubelet[2274]: I0120 00:38:47.622417 2274 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:38:47.622522 kubelet[2274]: I0120 00:38:47.622478 2274 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:38:47.623298 kubelet[2274]: I0120 00:38:47.623257 2274 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:38:47.624617 kubelet[2274]: E0120 00:38:47.624585 2274 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:38:47.624713 kubelet[2274]: E0120 00:38:47.624686 2274 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 00:38:47.666622 kubelet[2274]: E0120 00:38:47.666576 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:38:47.667999 kubelet[2274]: E0120 00:38:47.666852 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:38:47.669286 kubelet[2274]: E0120 00:38:47.669222 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:38:47.725148 kubelet[2274]: I0120 00:38:47.724953 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:38:47.725422 kubelet[2274]: E0120 00:38:47.725386 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jan 20 00:38:47.737023 kubelet[2274]: I0120 00:38:47.736851 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:47.737023 kubelet[2274]: I0120 00:38:47.736936 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:47.737023 kubelet[2274]: I0120 00:38:47.737014 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:47.737209 kubelet[2274]: I0120 00:38:47.737052 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:47.737321 kubelet[2274]: E0120 00:38:47.737235 2274 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Jan 20 00:38:47.838289 kubelet[2274]: I0120 00:38:47.838118 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " 
pod="kube-system/kube-scheduler-localhost" Jan 20 00:38:47.838289 kubelet[2274]: I0120 00:38:47.838176 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a85b8fce8ffdc544eea23f31682e762f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a85b8fce8ffdc544eea23f31682e762f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:47.838289 kubelet[2274]: I0120 00:38:47.838262 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:47.838289 kubelet[2274]: I0120 00:38:47.838292 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a85b8fce8ffdc544eea23f31682e762f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a85b8fce8ffdc544eea23f31682e762f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:47.838499 kubelet[2274]: I0120 00:38:47.838305 2274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a85b8fce8ffdc544eea23f31682e762f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a85b8fce8ffdc544eea23f31682e762f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:47.927954 kubelet[2274]: I0120 00:38:47.927912 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:38:47.928618 kubelet[2274]: E0120 00:38:47.928514 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jan 20 00:38:47.968041 kubelet[2274]: E0120 00:38:47.967917 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:47.968041 kubelet[2274]: E0120 00:38:47.967920 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:47.968852 containerd[1583]: time="2026-01-20T00:38:47.968790690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 00:38:47.969745 containerd[1583]: time="2026-01-20T00:38:47.969656241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 00:38:47.970829 kubelet[2274]: E0120 00:38:47.970672 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:47.971193 containerd[1583]: time="2026-01-20T00:38:47.971142185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a85b8fce8ffdc544eea23f31682e762f,Namespace:kube-system,Attempt:0,}" Jan 20 00:38:48.138729 kubelet[2274]: E0120 00:38:48.138547 2274 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" Jan 20 00:38:48.331030 kubelet[2274]: I0120 00:38:48.330885 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:38:48.331628 kubelet[2274]: E0120 00:38:48.331372 2274 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jan 20 00:38:48.366175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473844227.mount: Deactivated successfully. Jan 20 00:38:48.372665 containerd[1583]: time="2026-01-20T00:38:48.372584794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:38:48.376285 containerd[1583]: time="2026-01-20T00:38:48.376250770Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 20 00:38:48.377915 containerd[1583]: time="2026-01-20T00:38:48.377823919Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:38:48.379229 containerd[1583]: time="2026-01-20T00:38:48.379152045Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:38:48.381922 containerd[1583]: time="2026-01-20T00:38:48.380595038Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:38:48.381922 containerd[1583]: time="2026-01-20T00:38:48.381389436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:38:48.383194 containerd[1583]: time="2026-01-20T00:38:48.382608818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:38:48.384198 containerd[1583]: time="2026-01-20T00:38:48.384155337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:38:48.388580 containerd[1583]: time="2026-01-20T00:38:48.388499967Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 418.714705ms" Jan 20 00:38:48.390630 containerd[1583]: time="2026-01-20T00:38:48.390400524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
421.507203ms" Jan 20 00:38:48.394224 containerd[1583]: time="2026-01-20T00:38:48.394147402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.929675ms" Jan 20 00:38:48.467864 kubelet[2274]: W0120 00:38:48.467763 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 20 00:38:48.468043 kubelet[2274]: E0120 00:38:48.467869 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:38:48.502859 containerd[1583]: time="2026-01-20T00:38:48.502685651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:38:48.503389 containerd[1583]: time="2026-01-20T00:38:48.503307773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:38:48.503528 containerd[1583]: time="2026-01-20T00:38:48.503494431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:38:48.504576 containerd[1583]: time="2026-01-20T00:38:48.504471525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:38:48.505474 containerd[1583]: time="2026-01-20T00:38:48.505309104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:38:48.505474 containerd[1583]: time="2026-01-20T00:38:48.505372783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:38:48.505474 containerd[1583]: time="2026-01-20T00:38:48.505390496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:38:48.505666 containerd[1583]: time="2026-01-20T00:38:48.505511041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:38:48.509513 containerd[1583]: time="2026-01-20T00:38:48.508508001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:38:48.509513 containerd[1583]: time="2026-01-20T00:38:48.508596216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:38:48.509513 containerd[1583]: time="2026-01-20T00:38:48.508674942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:38:48.509513 containerd[1583]: time="2026-01-20T00:38:48.508886167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:38:48.593476 containerd[1583]: time="2026-01-20T00:38:48.593394186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2b96f4ed6834ed50bd5e31f80a1ce7dc604561c0a131f8fcde7819889618db2\"" Jan 20 00:38:48.596253 kubelet[2274]: E0120 00:38:48.596156 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:48.603998 containerd[1583]: time="2026-01-20T00:38:48.601810456Z" level=info msg="CreateContainer within sandbox \"a2b96f4ed6834ed50bd5e31f80a1ce7dc604561c0a131f8fcde7819889618db2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 00:38:48.603998 containerd[1583]: time="2026-01-20T00:38:48.602565514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a85b8fce8ffdc544eea23f31682e762f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdf91ced0c5479fdc8c0ddf9638403e3d356f8f76806bad5d428647d3c902943\"" Jan 20 00:38:48.604107 kubelet[2274]: E0120 00:38:48.604021 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:48.609816 containerd[1583]: time="2026-01-20T00:38:48.609742294Z" level=info msg="CreateContainer within sandbox \"bdf91ced0c5479fdc8c0ddf9638403e3d356f8f76806bad5d428647d3c902943\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 00:38:48.614641 containerd[1583]: time="2026-01-20T00:38:48.614173056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"19b8564539e120bad07719d5b0cd36d6a65c294e733286872df6eb81a59cd539\"" Jan 20 00:38:48.615045 kubelet[2274]: E0120 00:38:48.614901 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:48.617141 containerd[1583]: time="2026-01-20T00:38:48.617087747Z" level=info msg="CreateContainer within sandbox \"19b8564539e120bad07719d5b0cd36d6a65c294e733286872df6eb81a59cd539\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 00:38:48.628675 containerd[1583]: time="2026-01-20T00:38:48.628570550Z" level=info msg="CreateContainer within sandbox \"a2b96f4ed6834ed50bd5e31f80a1ce7dc604561c0a131f8fcde7819889618db2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"90e002f0dbbbe5a15a0d16e3f822c9211d741b406f5660a5e32d21522a54dbc8\"" Jan 20 00:38:48.629813 containerd[1583]: time="2026-01-20T00:38:48.629701742Z" level=info msg="StartContainer for \"90e002f0dbbbe5a15a0d16e3f822c9211d741b406f5660a5e32d21522a54dbc8\"" Jan 20 00:38:48.642514 containerd[1583]: time="2026-01-20T00:38:48.642358966Z" level=info msg="CreateContainer within sandbox \"bdf91ced0c5479fdc8c0ddf9638403e3d356f8f76806bad5d428647d3c902943\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8b8e455fd5772816ad3270aa9f5914e578ad38135665ade30c3650aa08ae2c34\"" Jan 20 00:38:48.644219 containerd[1583]: time="2026-01-20T00:38:48.644163645Z" level=info msg="StartContainer for 
\"8b8e455fd5772816ad3270aa9f5914e578ad38135665ade30c3650aa08ae2c34\"" Jan 20 00:38:48.645679 containerd[1583]: time="2026-01-20T00:38:48.645609424Z" level=info msg="CreateContainer within sandbox \"19b8564539e120bad07719d5b0cd36d6a65c294e733286872df6eb81a59cd539\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a6f52f405a786a1293f6b348cbfac609093bf7c0014ca88eb43d835e10e3e352\"" Jan 20 00:38:48.647071 containerd[1583]: time="2026-01-20T00:38:48.646025430Z" level=info msg="StartContainer for \"a6f52f405a786a1293f6b348cbfac609093bf7c0014ca88eb43d835e10e3e352\"" Jan 20 00:38:48.792830 containerd[1583]: time="2026-01-20T00:38:48.791679162Z" level=info msg="StartContainer for \"90e002f0dbbbe5a15a0d16e3f822c9211d741b406f5660a5e32d21522a54dbc8\" returns successfully" Jan 20 00:38:48.801069 kubelet[2274]: W0120 00:38:48.799391 2274 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 20 00:38:48.801069 kubelet[2274]: E0120 00:38:48.799596 2274 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:38:48.818738 containerd[1583]: time="2026-01-20T00:38:48.818684069Z" level=info msg="StartContainer for \"8b8e455fd5772816ad3270aa9f5914e578ad38135665ade30c3650aa08ae2c34\" returns successfully" Jan 20 00:38:48.834647 containerd[1583]: time="2026-01-20T00:38:48.834548353Z" level=info msg="StartContainer for \"a6f52f405a786a1293f6b348cbfac609093bf7c0014ca88eb43d835e10e3e352\" returns successfully" Jan 20 00:38:49.148024 kubelet[2274]: I0120 00:38:49.147810 2274 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:38:49.583751 kubelet[2274]: E0120 00:38:49.583631 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:38:49.584202 kubelet[2274]: E0120 00:38:49.583865 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:49.586804 kubelet[2274]: E0120 00:38:49.586763 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:38:49.588086 kubelet[2274]: E0120 00:38:49.588047 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:49.594137 kubelet[2274]: E0120 00:38:49.593913 2274 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:38:49.597007 kubelet[2274]: E0120 00:38:49.594716 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:50.286708 kubelet[2274]: E0120 00:38:50.286623 2274 nodelease.go:49] "Failed to get node when trying to set owner ref 
to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 00:38:50.452406 kubelet[2274]: I0120 00:38:50.452310 2274 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:38:50.522553 kubelet[2274]: I0120 00:38:50.522201 2274 apiserver.go:52] "Watching apiserver" Jan 20 00:38:50.536132 kubelet[2274]: I0120 00:38:50.536103 2274 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:38:50.537394 kubelet[2274]: I0120 00:38:50.537208 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:38:50.543520 kubelet[2274]: E0120 00:38:50.543494 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 00:38:50.543793 kubelet[2274]: I0120 00:38:50.543605 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:50.545035 kubelet[2274]: E0120 00:38:50.545017 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:50.545272 kubelet[2274]: I0120 00:38:50.545115 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:50.546880 kubelet[2274]: E0120 00:38:50.546861 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:50.592586 kubelet[2274]: I0120 00:38:50.592415 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:38:50.594221 kubelet[2274]: I0120 00:38:50.593189 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:50.594746 kubelet[2274]: E0120 00:38:50.594694 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 00:38:50.594959 kubelet[2274]: E0120 00:38:50.594909 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:50.595697 kubelet[2274]: E0120 00:38:50.595667 2274 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:50.595934 kubelet[2274]: E0120 00:38:50.595867 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:51.594561 kubelet[2274]: I0120 00:38:51.594438 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:51.604842 kubelet[2274]: E0120 00:38:51.604685 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
00:38:52.223906 kubelet[2274]: I0120 00:38:52.223852 2274 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:52.233488 kubelet[2274]: E0120 00:38:52.233173 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:52.597649 kubelet[2274]: E0120 00:38:52.597487 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:52.597649 kubelet[2274]: E0120 00:38:52.597518 2274 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:52.650207 systemd[1]: Reloading requested from client PID 2550 ('systemctl') (unit session-7.scope)... Jan 20 00:38:52.650248 systemd[1]: Reloading... Jan 20 00:38:52.729067 zram_generator::config[2591]: No configuration found. Jan 20 00:38:52.860076 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:38:52.933509 systemd[1]: Reloading finished in 282 ms. Jan 20 00:38:52.977286 kubelet[2274]: I0120 00:38:52.977245 2274 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:38:52.977377 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:38:53.001659 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:38:53.002078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:38:53.010574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:38:53.168115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:38:53.181707 (kubelet)[2644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:38:53.237031 kubelet[2644]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:38:53.237031 kubelet[2644]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:38:53.237031 kubelet[2644]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:38:53.237611 kubelet[2644]: I0120 00:38:53.237060 2644 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:38:53.248277 kubelet[2644]: I0120 00:38:53.248200 2644 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 00:38:53.248277 kubelet[2644]: I0120 00:38:53.248248 2644 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:38:53.248653 kubelet[2644]: I0120 00:38:53.248600 2644 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 00:38:53.250227 kubelet[2644]: I0120 00:38:53.250173 2644 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 00:38:53.253112 kubelet[2644]: I0120 00:38:53.253046 2644 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:38:53.257315 kubelet[2644]: E0120 00:38:53.257196 2644 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:38:53.257315 kubelet[2644]: I0120 00:38:53.257246 2644 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:38:53.265238 kubelet[2644]: I0120 00:38:53.264818 2644 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 00:38:53.265850 kubelet[2644]: I0120 00:38:53.265750 2644 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:38:53.266090 kubelet[2644]: I0120 00:38:53.265828 2644 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 20 00:38:53.266196 kubelet[2644]: I0120 00:38:53.266099 2644 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 20 00:38:53.266196 kubelet[2644]: I0120 00:38:53.266112 2644 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 00:38:53.266196 kubelet[2644]: I0120 00:38:53.266168 2644 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:38:53.266401 kubelet[2644]: I0120 00:38:53.266369 2644 kubelet.go:446] "Attempting to sync node with API server" Jan 20 00:38:53.266425 kubelet[2644]: I0120 00:38:53.266406 2644 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:38:53.266479 kubelet[2644]: I0120 00:38:53.266425 2644 kubelet.go:352] "Adding apiserver pod source" Jan 20 00:38:53.266479 kubelet[2644]: I0120 00:38:53.266437 2644 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:38:53.269995 kubelet[2644]: I0120 00:38:53.267698 2644 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:38:53.269995 kubelet[2644]: I0120 00:38:53.268144 2644 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 00:38:53.269995 kubelet[2644]: I0120 00:38:53.268655 2644 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:38:53.269995 kubelet[2644]: I0120 00:38:53.268716 2644 server.go:1287] "Started kubelet" Jan 20 00:38:53.272560 kubelet[2644]: I0120 00:38:53.272491 2644 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:38:53.272919 kubelet[2644]: I0120 00:38:53.272856 2644 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:38:53.272947 kubelet[2644]: I0120 00:38:53.272929 2644 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:38:53.275521 kubelet[2644]: I0120 00:38:53.274360 2644 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:38:53.275521 kubelet[2644]: I0120 00:38:53.274517 2644 server.go:479] "Adding debug handlers to kubelet server" Jan 20 00:38:53.280543 kubelet[2644]: I0120 00:38:53.279606 2644 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:38:53.282682 kubelet[2644]: I0120 00:38:53.282587 2644 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:38:53.283087 kubelet[2644]: I0120 00:38:53.282796 2644 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:38:53.283087 kubelet[2644]: I0120 00:38:53.283059 2644 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:38:53.283415 kubelet[2644]: E0120 00:38:53.283300 2644 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:38:53.284960 kubelet[2644]: I0120 00:38:53.284914 2644 factory.go:221] Registration of the systemd container factory successfully Jan 20 00:38:53.285319 kubelet[2644]: I0120 00:38:53.285082 2644 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:38:53.287253 kubelet[2644]: I0120 00:38:53.287087 2644 factory.go:221] Registration of the containerd container factory successfully Jan 20 00:38:53.297270 kubelet[2644]: I0120 00:38:53.297055 2644 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 00:38:53.299620 kubelet[2644]: I0120 00:38:53.299598 2644 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 00:38:53.299733 kubelet[2644]: I0120 00:38:53.299719 2644 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 00:38:53.299817 kubelet[2644]: I0120 00:38:53.299803 2644 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 00:38:53.299901 kubelet[2644]: I0120 00:38:53.299887 2644 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 00:38:53.300112 kubelet[2644]: E0120 00:38:53.300087 2644 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:38:53.356030 kubelet[2644]: I0120 00:38:53.355860 2644 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:38:53.356030 kubelet[2644]: I0120 00:38:53.355885 2644 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:38:53.356030 kubelet[2644]: I0120 00:38:53.355912 2644 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:38:53.356227 kubelet[2644]: I0120 00:38:53.356185 2644 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 00:38:53.356259 kubelet[2644]: I0120 00:38:53.356222 2644 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 00:38:53.356259 kubelet[2644]: I0120 00:38:53.356254 2644 policy_none.go:49] "None policy: Start" Jan 20 00:38:53.356321 kubelet[2644]: I0120 00:38:53.356267 2644 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:38:53.356321 kubelet[2644]: I0120 00:38:53.356282 2644 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:38:53.356501 kubelet[2644]: I0120 00:38:53.356428 2644 state_mem.go:75] "Updated machine memory state" Jan 20 00:38:53.359048 kubelet[2644]: I0120 00:38:53.358839 2644 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 00:38:53.359180 kubelet[2644]: I0120 00:38:53.359130 2644 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:38:53.359180 kubelet[2644]: I0120 00:38:53.359145 2644 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:38:53.360523 kubelet[2644]: I0120 00:38:53.360428 2644 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:38:53.361930 kubelet[2644]: E0120 00:38:53.361900 2644 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:38:53.400906 kubelet[2644]: I0120 00:38:53.400857 2644 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:53.401383 kubelet[2644]: I0120 00:38:53.400938 2644 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:53.401563 kubelet[2644]: I0120 00:38:53.401023 2644 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:38:53.413711 kubelet[2644]: E0120 00:38:53.413424 2644 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:53.414150 kubelet[2644]: E0120 00:38:53.414111 2644 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:53.468396 kubelet[2644]: I0120 00:38:53.468045 2644 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:38:53.484353 kubelet[2644]: I0120 00:38:53.484238 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a85b8fce8ffdc544eea23f31682e762f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a85b8fce8ffdc544eea23f31682e762f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:53.484353 kubelet[2644]: I0120 00:38:53.484304 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:53.484353 kubelet[2644]: I0120 00:38:53.484339 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:38:53.484353 kubelet[2644]: I0120 00:38:53.484363 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:53.484649 kubelet[2644]: I0120 00:38:53.484387 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:53.484649 kubelet[2644]: I0120 00:38:53.484428 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:53.484649 
kubelet[2644]: I0120 00:38:53.484528 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a85b8fce8ffdc544eea23f31682e762f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a85b8fce8ffdc544eea23f31682e762f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:53.484649 kubelet[2644]: I0120 00:38:53.484562 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a85b8fce8ffdc544eea23f31682e762f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a85b8fce8ffdc544eea23f31682e762f\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:38:53.484649 kubelet[2644]: I0120 00:38:53.484592 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:38:53.488900 kubelet[2644]: I0120 00:38:53.488611 2644 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 00:38:53.488900 kubelet[2644]: I0120 00:38:53.488899 2644 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:38:53.654200 sudo[2679]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 20 00:38:53.654770 sudo[2679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 20 00:38:53.711536 kubelet[2644]: E0120 00:38:53.711362 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:53.714131 kubelet[2644]: E0120 00:38:53.714097 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:53.714518 kubelet[2644]: E0120 00:38:53.714493 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:54.267134 kubelet[2644]: I0120 00:38:54.267018 2644 apiserver.go:52] "Watching apiserver" Jan 20 00:38:54.287119 kubelet[2644]: I0120 00:38:54.287078 2644 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:38:54.316407 kubelet[2644]: E0120 00:38:54.316289 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:54.317154 kubelet[2644]: E0120 00:38:54.317134 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:54.318254 kubelet[2644]: I0120 00:38:54.318145 2644 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:38:54.327283 kubelet[2644]: E0120 00:38:54.327194 2644 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 00:38:54.327430 kubelet[2644]: 
E0120 00:38:54.327385 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:54.356518 kubelet[2644]: I0120 00:38:54.356363 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.356330097 podStartE2EDuration="3.356330097s" podCreationTimestamp="2026-01-20 00:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:38:54.356092442 +0000 UTC m=+1.169179680" watchObservedRunningTime="2026-01-20 00:38:54.356330097 +0000 UTC m=+1.169417303" Jan 20 00:38:54.356776 kubelet[2644]: I0120 00:38:54.356560 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.356550127 podStartE2EDuration="1.356550127s" podCreationTimestamp="2026-01-20 00:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:38:54.346003322 +0000 UTC m=+1.159090539" watchObservedRunningTime="2026-01-20 00:38:54.356550127 +0000 UTC m=+1.169637334" Jan 20 00:38:54.386682 kubelet[2644]: I0120 00:38:54.386615 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.386597983 podStartE2EDuration="2.386597983s" podCreationTimestamp="2026-01-20 00:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:38:54.372654606 +0000 UTC m=+1.185741813" watchObservedRunningTime="2026-01-20 00:38:54.386597983 +0000 UTC m=+1.199685200" Jan 20 00:38:54.637489 sudo[2679]: pam_unix(sudo:session): session closed for user root Jan 20 00:38:54.992504 kernel: hrtimer: interrupt took 15486733 ns Jan 20 00:38:55.318889 kubelet[2644]: E0120 00:38:55.318802 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:55.318889 kubelet[2644]: E0120 00:38:55.318853 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:56.316308 sudo[1770]: pam_unix(sudo:session): session closed for user root Jan 20 00:38:56.319357 sshd[1764]: pam_unix(sshd:session): session closed for user core Jan 20 00:38:56.324659 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:57690.service: Deactivated successfully. Jan 20 00:38:56.328736 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:38:56.328956 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 00:38:56.331274 systemd-logind[1562]: Removed session 7. 
Jan 20 00:38:57.941669 kubelet[2644]: E0120 00:38:57.941542 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:58.912856 kubelet[2644]: I0120 00:38:58.912813 2644 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:38:58.913370 containerd[1583]: time="2026-01-20T00:38:58.913326177Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 00:38:58.913774 kubelet[2644]: I0120 00:38:58.913659 2644 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:38:59.714446 kubelet[2644]: E0120 00:38:59.714291 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:38:59.834694 kubelet[2644]: I0120 00:38:59.834372 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-clustermesh-secrets\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.834694 kubelet[2644]: I0120 00:38:59.834412 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-hubble-tls\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.834694 kubelet[2644]: I0120 00:38:59.834432 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fac7b355-5ebd-4a3c-a20b-9220754576a9-kube-proxy\") pod \"kube-proxy-zmhv2\" (UID: \"fac7b355-5ebd-4a3c-a20b-9220754576a9\") " pod="kube-system/kube-proxy-zmhv2" Jan 20 00:38:59.834694 kubelet[2644]: I0120 00:38:59.834447 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-cgroup\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.834694 kubelet[2644]: I0120 00:38:59.834488 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cni-path\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.834694 kubelet[2644]: I0120 00:38:59.834502 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gspl9\" (UniqueName: \"kubernetes.io/projected/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-kube-api-access-gspl9\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.834946 kubelet[2644]: I0120 00:38:59.834518 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fac7b355-5ebd-4a3c-a20b-9220754576a9-xtables-lock\") pod \"kube-proxy-zmhv2\" (UID: 
\"fac7b355-5ebd-4a3c-a20b-9220754576a9\") " pod="kube-system/kube-proxy-zmhv2" Jan 20 00:38:59.834946 kubelet[2644]: I0120 00:38:59.834530 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fac7b355-5ebd-4a3c-a20b-9220754576a9-lib-modules\") pod \"kube-proxy-zmhv2\" (UID: \"fac7b355-5ebd-4a3c-a20b-9220754576a9\") " pod="kube-system/kube-proxy-zmhv2" Jan 20 00:38:59.834946 kubelet[2644]: I0120 00:38:59.834542 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-bpf-maps\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.834946 kubelet[2644]: I0120 00:38:59.834555 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-run\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.834946 kubelet[2644]: I0120 00:38:59.834577 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-xtables-lock\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.834946 kubelet[2644]: I0120 00:38:59.834591 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwcg\" (UniqueName: \"kubernetes.io/projected/fac7b355-5ebd-4a3c-a20b-9220754576a9-kube-api-access-bwwcg\") pod \"kube-proxy-zmhv2\" (UID: \"fac7b355-5ebd-4a3c-a20b-9220754576a9\") " pod="kube-system/kube-proxy-zmhv2" Jan 20 00:38:59.835249 kubelet[2644]: I0120 00:38:59.834607 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-etc-cni-netd\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.835249 kubelet[2644]: I0120 00:38:59.834712 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-lib-modules\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.835249 kubelet[2644]: I0120 00:38:59.834805 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-config-path\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.835249 kubelet[2644]: I0120 00:38:59.834830 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-hostproc\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.835249 kubelet[2644]: I0120 00:38:59.834854 2644 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-host-proc-sys-net\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:38:59.835249 kubelet[2644]: I0120 00:38:59.834878 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-host-proc-sys-kernel\") pod \"cilium-46hpc\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " pod="kube-system/cilium-46hpc" Jan 20 00:39:00.037354 kubelet[2644]: I0120 00:39:00.037111 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec35af2d-b068-4cdc-a65b-12929172d504-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vq5rk\" (UID: \"ec35af2d-b068-4cdc-a65b-12929172d504\") " pod="kube-system/cilium-operator-6c4d7847fc-vq5rk" Jan 20 00:39:00.037354 kubelet[2644]: I0120 00:39:00.037192 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2487t\" (UniqueName: \"kubernetes.io/projected/ec35af2d-b068-4cdc-a65b-12929172d504-kube-api-access-2487t\") pod \"cilium-operator-6c4d7847fc-vq5rk\" (UID: \"ec35af2d-b068-4cdc-a65b-12929172d504\") " pod="kube-system/cilium-operator-6c4d7847fc-vq5rk" Jan 20 00:39:00.063690 kubelet[2644]: E0120 00:39:00.063545 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:00.064339 containerd[1583]: time="2026-01-20T00:39:00.064288048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmhv2,Uid:fac7b355-5ebd-4a3c-a20b-9220754576a9,Namespace:kube-system,Attempt:0,}" Jan 20 00:39:00.074528 kubelet[2644]: E0120 00:39:00.074372 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:00.075064 containerd[1583]: time="2026-01-20T00:39:00.074953425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-46hpc,Uid:e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a,Namespace:kube-system,Attempt:0,}" Jan 20 00:39:00.102585 containerd[1583]: time="2026-01-20T00:39:00.102246221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:39:00.103023 containerd[1583]: time="2026-01-20T00:39:00.102447925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:39:00.103023 containerd[1583]: time="2026-01-20T00:39:00.102667271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:00.103718 containerd[1583]: time="2026-01-20T00:39:00.103245663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:00.117045 containerd[1583]: time="2026-01-20T00:39:00.116577714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:39:00.117045 containerd[1583]: time="2026-01-20T00:39:00.116649957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:39:00.117045 containerd[1583]: time="2026-01-20T00:39:00.116719456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:00.117045 containerd[1583]: time="2026-01-20T00:39:00.116844898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:00.165879 containerd[1583]: time="2026-01-20T00:39:00.165737672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zmhv2,Uid:fac7b355-5ebd-4a3c-a20b-9220754576a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"23ec81468e170e1881ac7e49477bf55afc69fffe3d04874d14160c46a15b5d59\"" Jan 20 00:39:00.166836 kubelet[2644]: E0120 00:39:00.166800 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:00.174536 containerd[1583]: time="2026-01-20T00:39:00.174060522Z" level=info msg="CreateContainer within sandbox \"23ec81468e170e1881ac7e49477bf55afc69fffe3d04874d14160c46a15b5d59\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:39:00.183338 containerd[1583]: time="2026-01-20T00:39:00.183248390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-46hpc,Uid:e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\"" Jan 20 00:39:00.184332 kubelet[2644]: E0120 00:39:00.184183 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:00.185654 containerd[1583]: time="2026-01-20T00:39:00.185523066Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 00:39:00.199041 containerd[1583]: time="2026-01-20T00:39:00.198905381Z" level=info msg="CreateContainer within sandbox \"23ec81468e170e1881ac7e49477bf55afc69fffe3d04874d14160c46a15b5d59\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d2e96d33054302d406c6c481d9f27c513159c3cf10b511f04f7c453bd98f8fd6\"" Jan 20 00:39:00.199826 containerd[1583]: time="2026-01-20T00:39:00.199752199Z" level=info msg="StartContainer for \"d2e96d33054302d406c6c481d9f27c513159c3cf10b511f04f7c453bd98f8fd6\"" Jan 20 00:39:00.284039 kubelet[2644]: E0120 00:39:00.277586 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:00.284231 containerd[1583]: time="2026-01-20T00:39:00.278682871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vq5rk,Uid:ec35af2d-b068-4cdc-a65b-12929172d504,Namespace:kube-system,Attempt:0,}" Jan 20 00:39:00.316738 containerd[1583]: time="2026-01-20T00:39:00.315667500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:39:00.316738 containerd[1583]: time="2026-01-20T00:39:00.315733542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:39:00.316738 containerd[1583]: time="2026-01-20T00:39:00.315748861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:00.316738 containerd[1583]: time="2026-01-20T00:39:00.315846642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:00.338058 kubelet[2644]: E0120 00:39:00.332714 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:00.481739 containerd[1583]: time="2026-01-20T00:39:00.481489829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vq5rk,Uid:ec35af2d-b068-4cdc-a65b-12929172d504,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4\"" Jan 20 00:39:00.482931 containerd[1583]: time="2026-01-20T00:39:00.482533274Z" level=info msg="StartContainer for \"d2e96d33054302d406c6c481d9f27c513159c3cf10b511f04f7c453bd98f8fd6\" returns successfully" Jan 20 00:39:00.484057 kubelet[2644]: E0120 00:39:00.483810 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:01.338624 kubelet[2644]: E0120 00:39:01.338571 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:01.339297 kubelet[2644]: E0120 00:39:01.339234 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:01.350572 kubelet[2644]: I0120 00:39:01.350381 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zmhv2" podStartSLOduration=2.350365598 podStartE2EDuration="2.350365598s" podCreationTimestamp="2026-01-20 00:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:39:01.350295959 +0000 UTC m=+8.163383166" watchObservedRunningTime="2026-01-20 00:39:01.350365598 +0000 UTC m=+8.163452806" Jan 20 00:39:02.339820 kubelet[2644]: E0120 00:39:02.339756 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:02.343605 kubelet[2644]: E0120 00:39:02.343497 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:03.342464 kubelet[2644]: E0120 00:39:03.342339 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:04.344303 kubelet[2644]: E0120 00:39:04.344205 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:06.523267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3249886220.mount: Deactivated successfully. Jan 20 00:39:07.969743 kubelet[2644]: E0120 00:39:07.969338 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:08.296202 containerd[1583]: time="2026-01-20T00:39:08.295947360Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:39:08.297016 containerd[1583]: time="2026-01-20T00:39:08.296936594Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 00:39:08.298251 containerd[1583]: time="2026-01-20T00:39:08.298163279Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:39:08.300019 containerd[1583]: time="2026-01-20T00:39:08.299931516Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.114348489s" Jan 20 00:39:08.300019 containerd[1583]: time="2026-01-20T00:39:08.300003601Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 00:39:08.302182 containerd[1583]: time="2026-01-20T00:39:08.302142817Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 00:39:08.303097 containerd[1583]: time="2026-01-20T00:39:08.302949767Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 00:39:08.327543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211303933.mount: Deactivated successfully. 
Jan 20 00:39:08.332236 containerd[1583]: time="2026-01-20T00:39:08.332170488Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\"" Jan 20 00:39:08.332959 containerd[1583]: time="2026-01-20T00:39:08.332909592Z" level=info msg="StartContainer for \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\"" Jan 20 00:39:08.428255 containerd[1583]: time="2026-01-20T00:39:08.428214607Z" level=info msg="StartContainer for \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\" returns successfully" Jan 20 00:39:08.565834 containerd[1583]: time="2026-01-20T00:39:08.565648735Z" level=info msg="shim disconnected" id=98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458 namespace=k8s.io Jan 20 00:39:08.565834 containerd[1583]: time="2026-01-20T00:39:08.565719606Z" level=warning msg="cleaning up after shim disconnected" id=98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458 namespace=k8s.io Jan 20 00:39:08.565834 containerd[1583]: time="2026-01-20T00:39:08.565736838Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:39:09.324953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458-rootfs.mount: Deactivated successfully. Jan 20 00:39:09.359120 kubelet[2644]: E0120 00:39:09.359035 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:09.361810 containerd[1583]: time="2026-01-20T00:39:09.361754574Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 00:39:09.382118 containerd[1583]: time="2026-01-20T00:39:09.381501428Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\"" Jan 20 00:39:09.382735 containerd[1583]: time="2026-01-20T00:39:09.382422610Z" level=info msg="StartContainer for \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\"" Jan 20 00:39:09.461422 containerd[1583]: time="2026-01-20T00:39:09.461334756Z" level=info msg="StartContainer for \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\" returns successfully" Jan 20 00:39:09.475870 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:39:09.476772 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:39:09.476860 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:39:09.486249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:39:09.511363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 20 00:39:09.513827 containerd[1583]: time="2026-01-20T00:39:09.513771353Z" level=info msg="shim disconnected" id=d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4 namespace=k8s.io Jan 20 00:39:09.514111 containerd[1583]: time="2026-01-20T00:39:09.514069096Z" level=warning msg="cleaning up after shim disconnected" id=d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4 namespace=k8s.io Jan 20 00:39:09.514111 containerd[1583]: time="2026-01-20T00:39:09.514102659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:39:09.534708 containerd[1583]: time="2026-01-20T00:39:09.534523554Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:39:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 00:39:09.857361 containerd[1583]: time="2026-01-20T00:39:09.857283772Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:39:09.858410 containerd[1583]: time="2026-01-20T00:39:09.858352198Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 20 00:39:09.859598 containerd[1583]: time="2026-01-20T00:39:09.859529067Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:39:09.861141 containerd[1583]: time="2026-01-20T00:39:09.861086412Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.558904893s" Jan 20 00:39:09.861141 containerd[1583]: time="2026-01-20T00:39:09.861129310Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 20 00:39:09.863625 containerd[1583]: time="2026-01-20T00:39:09.863523096Z" level=info msg="CreateContainer within sandbox \"3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 20 00:39:09.875227 containerd[1583]: time="2026-01-20T00:39:09.875159614Z" level=info msg="CreateContainer within sandbox \"3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\"" Jan 20 00:39:09.876041 containerd[1583]: time="2026-01-20T00:39:09.875932586Z" level=info msg="StartContainer for \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\"" Jan 20 00:39:09.940019 containerd[1583]: time="2026-01-20T00:39:09.939940742Z" level=info msg="StartContainer for \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\" returns successfully" Jan 20 00:39:10.332936 systemd[1]: 
run-containerd-runc-k8s.io-d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4-runc.ZD5cO5.mount: Deactivated successfully. Jan 20 00:39:10.333618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4-rootfs.mount: Deactivated successfully. Jan 20 00:39:10.363152 kubelet[2644]: E0120 00:39:10.363109 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:10.367138 containerd[1583]: time="2026-01-20T00:39:10.366550997Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 00:39:10.367606 kubelet[2644]: E0120 00:39:10.366879 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:10.386197 containerd[1583]: time="2026-01-20T00:39:10.386116869Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\"" Jan 20 00:39:10.389108 containerd[1583]: time="2026-01-20T00:39:10.386893727Z" level=info msg="StartContainer for \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\"" Jan 20 00:39:10.409431 kubelet[2644]: I0120 00:39:10.409130 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vq5rk" podStartSLOduration=2.031949176 podStartE2EDuration="11.409105252s" podCreationTimestamp="2026-01-20 00:38:59 +0000 UTC" firstStartedPulling="2026-01-20 00:39:00.484918436 +0000 UTC m=+7.298005664" lastFinishedPulling="2026-01-20 00:39:09.862074533 +0000 UTC m=+16.675161740" observedRunningTime="2026-01-20 00:39:10.408892004 +0000 UTC m=+17.221979211" watchObservedRunningTime="2026-01-20 00:39:10.409105252 +0000 UTC m=+17.222192458" Jan 20 00:39:10.512352 containerd[1583]: time="2026-01-20T00:39:10.512261472Z" level=info msg="StartContainer for \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\" returns successfully" Jan 20 00:39:10.660948 containerd[1583]: time="2026-01-20T00:39:10.660672209Z" level=info msg="shim disconnected" id=38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d namespace=k8s.io Jan 20 00:39:10.660948 containerd[1583]: time="2026-01-20T00:39:10.660737861Z" level=warning msg="cleaning up after shim disconnected" id=38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d namespace=k8s.io Jan 20 00:39:10.660948 containerd[1583]: time="2026-01-20T00:39:10.660750244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:39:11.328934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d-rootfs.mount: Deactivated successfully. 
Jan 20 00:39:11.370681 kubelet[2644]: E0120 00:39:11.370638 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:11.371331 kubelet[2644]: E0120 00:39:11.371097 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:11.373560 containerd[1583]: time="2026-01-20T00:39:11.373511478Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 00:39:11.394595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333489435.mount: Deactivated successfully. Jan 20 00:39:11.395284 containerd[1583]: time="2026-01-20T00:39:11.395198925Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\"" Jan 20 00:39:11.397343 containerd[1583]: time="2026-01-20T00:39:11.396047423Z" level=info msg="StartContainer for \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\"" Jan 20 00:39:11.480459 containerd[1583]: time="2026-01-20T00:39:11.479535435Z" level=info msg="StartContainer for \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\" returns successfully" Jan 20 00:39:11.510948 containerd[1583]: time="2026-01-20T00:39:11.510867117Z" level=info msg="shim disconnected" id=0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d namespace=k8s.io Jan 20 00:39:11.510948 containerd[1583]: time="2026-01-20T00:39:11.510946635Z" level=warning msg="cleaning up after shim disconnected" id=0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d namespace=k8s.io Jan 20 00:39:11.511298 containerd[1583]: time="2026-01-20T00:39:11.511041161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:39:11.972668 update_engine[1565]: I20260120 00:39:11.972528 1565 update_attempter.cc:509] Updating boot flags... Jan 20 00:39:12.008025 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3353) Jan 20 00:39:12.053082 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3357) Jan 20 00:39:12.328413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d-rootfs.mount: Deactivated successfully. 
Jan 20 00:39:12.375498 kubelet[2644]: E0120 00:39:12.375442 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:12.378604 containerd[1583]: time="2026-01-20T00:39:12.378506176Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 00:39:12.408836 containerd[1583]: time="2026-01-20T00:39:12.408757590Z" level=info msg="CreateContainer within sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\"" Jan 20 00:39:12.409594 containerd[1583]: time="2026-01-20T00:39:12.409275729Z" level=info msg="StartContainer for \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\"" Jan 20 00:39:12.502090 containerd[1583]: time="2026-01-20T00:39:12.502050186Z" level=info msg="StartContainer for \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\" returns successfully" Jan 20 00:39:12.619120 kubelet[2644]: I0120 00:39:12.619019 2644 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:39:12.731254 kubelet[2644]: I0120 00:39:12.731176 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28efe43b-eccb-407d-bbc0-23676fbbab3d-config-volume\") pod \"coredns-668d6bf9bc-4ghg9\" (UID: \"28efe43b-eccb-407d-bbc0-23676fbbab3d\") " pod="kube-system/coredns-668d6bf9bc-4ghg9" Jan 20 00:39:12.731254 kubelet[2644]: I0120 00:39:12.731235 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5eee7c04-0ef7-4118-810c-6d960a82253f-config-volume\") pod \"coredns-668d6bf9bc-r4mc4\" (UID: \"5eee7c04-0ef7-4118-810c-6d960a82253f\") " pod="kube-system/coredns-668d6bf9bc-r4mc4" Jan 20 00:39:12.731254 kubelet[2644]: I0120 00:39:12.731254 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68lw8\" (UniqueName: \"kubernetes.io/projected/5eee7c04-0ef7-4118-810c-6d960a82253f-kube-api-access-68lw8\") pod \"coredns-668d6bf9bc-r4mc4\" (UID: \"5eee7c04-0ef7-4118-810c-6d960a82253f\") " pod="kube-system/coredns-668d6bf9bc-r4mc4" Jan 20 00:39:12.731501 kubelet[2644]: I0120 00:39:12.731270 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2h5b\" (UniqueName: \"kubernetes.io/projected/28efe43b-eccb-407d-bbc0-23676fbbab3d-kube-api-access-t2h5b\") pod \"coredns-668d6bf9bc-4ghg9\" (UID: \"28efe43b-eccb-407d-bbc0-23676fbbab3d\") " pod="kube-system/coredns-668d6bf9bc-4ghg9" Jan 20 00:39:12.958145 kubelet[2644]: E0120 00:39:12.958086 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:12.958959 containerd[1583]: time="2026-01-20T00:39:12.958902761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4ghg9,Uid:28efe43b-eccb-407d-bbc0-23676fbbab3d,Namespace:kube-system,Attempt:0,}" Jan 20 00:39:12.962239 kubelet[2644]: E0120 00:39:12.962160 2644 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:12.962722 containerd[1583]: time="2026-01-20T00:39:12.962613586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r4mc4,Uid:5eee7c04-0ef7-4118-810c-6d960a82253f,Namespace:kube-system,Attempt:0,}" Jan 20 00:39:13.384181 kubelet[2644]: E0120 00:39:13.384026 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:13.402425 kubelet[2644]: I0120 00:39:13.402308 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-46hpc" podStartSLOduration=6.285799652 podStartE2EDuration="14.402288011s" podCreationTimestamp="2026-01-20 00:38:59 +0000 UTC" firstStartedPulling="2026-01-20 00:39:00.18470649 +0000 UTC m=+6.997793696" lastFinishedPulling="2026-01-20 00:39:08.301194847 +0000 UTC m=+15.114282055" observedRunningTime="2026-01-20 00:39:13.400807333 +0000 UTC m=+20.213894540" watchObservedRunningTime="2026-01-20 00:39:13.402288011 +0000 UTC m=+20.215375288" Jan 20 00:39:14.386132 kubelet[2644]: E0120 00:39:14.386077 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:14.718427 systemd-networkd[1248]: cilium_host: Link UP Jan 20 00:39:14.719206 systemd-networkd[1248]: cilium_net: Link UP Jan 20 00:39:14.719438 systemd-networkd[1248]: cilium_net: Gained carrier Jan 20 00:39:14.719666 systemd-networkd[1248]: cilium_host: Gained carrier Jan 20 00:39:14.854877 systemd-networkd[1248]: cilium_vxlan: Link UP Jan 20 00:39:14.854886 systemd-networkd[1248]: cilium_vxlan: Gained carrier Jan 20 00:39:14.904261 systemd-networkd[1248]: cilium_host: Gained IPv6LL Jan 20 00:39:15.093070 kernel: NET: Registered PF_ALG protocol family Jan 20 00:39:15.388599 kubelet[2644]: E0120 00:39:15.388510 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:15.536261 systemd-networkd[1248]: cilium_net: Gained IPv6LL Jan 20 00:39:15.899842 systemd-networkd[1248]: lxc_health: Link UP Jan 20 00:39:15.907571 systemd-networkd[1248]: lxc_health: Gained carrier Jan 20 00:39:16.045016 systemd-networkd[1248]: lxcd9918563d5f3: Link UP Jan 20 00:39:16.054092 kernel: eth0: renamed from tmpdec19 Jan 20 00:39:16.062325 systemd-networkd[1248]: lxcd9918563d5f3: Gained carrier Jan 20 00:39:16.390753 kubelet[2644]: E0120 00:39:16.390722 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:16.537030 systemd-networkd[1248]: lxc1f4226312132: Link UP Jan 20 00:39:16.550064 kernel: eth0: renamed from tmp23c2b Jan 20 00:39:16.559134 systemd-networkd[1248]: lxc1f4226312132: Gained carrier Jan 20 00:39:16.688283 systemd-networkd[1248]: cilium_vxlan: Gained IPv6LL Jan 20 00:39:17.393115 systemd-networkd[1248]: lxc_health: Gained IPv6LL Jan 20 00:39:17.584394 systemd-networkd[1248]: lxcd9918563d5f3: Gained IPv6LL Jan 20 00:39:17.712324 systemd-networkd[1248]: lxc1f4226312132: Gained IPv6LL Jan 20 00:39:19.977083 containerd[1583]: time="2026-01-20T00:39:19.976082889Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:39:19.977083 containerd[1583]: time="2026-01-20T00:39:19.976162436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:39:19.977083 containerd[1583]: time="2026-01-20T00:39:19.976181172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:19.977083 containerd[1583]: time="2026-01-20T00:39:19.976305243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:19.983206 containerd[1583]: time="2026-01-20T00:39:19.982999805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:39:19.984637 containerd[1583]: time="2026-01-20T00:39:19.984219217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:39:19.984637 containerd[1583]: time="2026-01-20T00:39:19.984247789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:19.984790 containerd[1583]: time="2026-01-20T00:39:19.984576492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:39:20.021229 systemd[1]: run-containerd-runc-k8s.io-dec19d8bd1c4f8444f13e016f5d4e9fefb24fff52a24cac825ab621e8a8fad33-runc.XnEIR3.mount: Deactivated successfully. Jan 20 00:39:20.023790 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:39:20.025225 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:39:20.058559 containerd[1583]: time="2026-01-20T00:39:20.058161478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4ghg9,Uid:28efe43b-eccb-407d-bbc0-23676fbbab3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dec19d8bd1c4f8444f13e016f5d4e9fefb24fff52a24cac825ab621e8a8fad33\"" Jan 20 00:39:20.061068 kubelet[2644]: E0120 00:39:20.059522 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:20.066755 containerd[1583]: time="2026-01-20T00:39:20.066671233Z" level=info msg="CreateContainer within sandbox \"dec19d8bd1c4f8444f13e016f5d4e9fefb24fff52a24cac825ab621e8a8fad33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:39:20.073244 containerd[1583]: time="2026-01-20T00:39:20.073216354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r4mc4,Uid:5eee7c04-0ef7-4118-810c-6d960a82253f,Namespace:kube-system,Attempt:0,} returns sandbox id \"23c2b8ec652415e822b17f5fde935484fb0eb98e4acf5e041fd7ccf6153b6ab9\"" Jan 20 00:39:20.074641 kubelet[2644]: E0120 00:39:20.074612 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:20.076828 containerd[1583]: time="2026-01-20T00:39:20.076735871Z" level=info msg="CreateContainer within sandbox 
\"23c2b8ec652415e822b17f5fde935484fb0eb98e4acf5e041fd7ccf6153b6ab9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:39:20.099516 containerd[1583]: time="2026-01-20T00:39:20.099419155Z" level=info msg="CreateContainer within sandbox \"23c2b8ec652415e822b17f5fde935484fb0eb98e4acf5e041fd7ccf6153b6ab9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b178bc95cf32c4d1a5867b5423428d25243ffefe95c7ac291197daca75519dc\"" Jan 20 00:39:20.100609 containerd[1583]: time="2026-01-20T00:39:20.100558463Z" level=info msg="StartContainer for \"8b178bc95cf32c4d1a5867b5423428d25243ffefe95c7ac291197daca75519dc\"" Jan 20 00:39:20.104732 containerd[1583]: time="2026-01-20T00:39:20.104633957Z" level=info msg="CreateContainer within sandbox \"dec19d8bd1c4f8444f13e016f5d4e9fefb24fff52a24cac825ab621e8a8fad33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"098e0cdfeaf18b8aa1dffcf034ff913d6111acf80005f161abde67eae002b458\"" Jan 20 00:39:20.108241 containerd[1583]: time="2026-01-20T00:39:20.108188498Z" level=info msg="StartContainer for \"098e0cdfeaf18b8aa1dffcf034ff913d6111acf80005f161abde67eae002b458\"" Jan 20 00:39:20.201187 containerd[1583]: time="2026-01-20T00:39:20.200937246Z" level=info msg="StartContainer for \"8b178bc95cf32c4d1a5867b5423428d25243ffefe95c7ac291197daca75519dc\" returns successfully" Jan 20 00:39:20.201187 containerd[1583]: time="2026-01-20T00:39:20.200961619Z" level=info msg="StartContainer for \"098e0cdfeaf18b8aa1dffcf034ff913d6111acf80005f161abde67eae002b458\" returns successfully" Jan 20 00:39:20.402241 kubelet[2644]: E0120 00:39:20.401160 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:20.405059 kubelet[2644]: E0120 00:39:20.404926 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:20.418090 kubelet[2644]: I0120 00:39:20.417932 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4ghg9" podStartSLOduration=21.417911652 podStartE2EDuration="21.417911652s" podCreationTimestamp="2026-01-20 00:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:39:20.417171519 +0000 UTC m=+27.230258726" watchObservedRunningTime="2026-01-20 00:39:20.417911652 +0000 UTC m=+27.230998879" Jan 20 00:39:20.453445 kubelet[2644]: I0120 00:39:20.453162 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r4mc4" podStartSLOduration=21.453081026 podStartE2EDuration="21.453081026s" podCreationTimestamp="2026-01-20 00:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:39:20.452118943 +0000 UTC m=+27.265206150" watchObservedRunningTime="2026-01-20 00:39:20.453081026 +0000 UTC m=+27.266168232" Jan 20 00:39:21.406956 kubelet[2644]: E0120 00:39:21.406574 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:21.406956 kubelet[2644]: E0120 00:39:21.406759 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:22.408826 kubelet[2644]: E0120 00:39:22.408731 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:23.102141 kubelet[2644]: I0120 00:39:23.102066 2644 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:39:23.102616 kubelet[2644]: E0120 00:39:23.102584 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:23.411104 kubelet[2644]: E0120 00:39:23.410853 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:39:39.456205 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:52068.service - OpenSSH per-connection server daemon (10.0.0.1:52068). Jan 20 00:39:39.492938 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 52068 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:39:39.494918 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:39:39.500069 systemd-logind[1562]: New session 8 of user core. Jan 20 00:39:39.516342 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 00:39:39.868785 sshd[4041]: pam_unix(sshd:session): session closed for user core Jan 20 00:39:39.873100 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:52068.service: Deactivated successfully. Jan 20 00:39:39.875435 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Jan 20 00:39:39.875514 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 00:39:39.876878 systemd-logind[1562]: Removed session 8. Jan 20 00:39:44.897456 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:51820.service - OpenSSH per-connection server daemon (10.0.0.1:51820). Jan 20 00:39:44.930778 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 51820 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:39:44.932749 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:39:44.938693 systemd-logind[1562]: New session 9 of user core. Jan 20 00:39:44.950469 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 00:39:45.089544 sshd[4057]: pam_unix(sshd:session): session closed for user core Jan 20 00:39:45.094054 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:51820.service: Deactivated successfully. Jan 20 00:39:45.096961 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Jan 20 00:39:45.097110 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 00:39:45.098815 systemd-logind[1562]: Removed session 9. Jan 20 00:39:50.108394 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:51828.service - OpenSSH per-connection server daemon (10.0.0.1:51828). Jan 20 00:39:50.147005 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 51828 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:39:50.149349 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:39:50.155393 systemd-logind[1562]: New session 10 of user core. Jan 20 00:39:50.169513 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 20 00:39:50.305248 sshd[4073]: pam_unix(sshd:session): session closed for user core Jan 20 00:39:50.310338 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:51828.service: Deactivated successfully. Jan 20 00:39:50.313206 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Jan 20 00:39:50.313343 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 00:39:50.315209 systemd-logind[1562]: Removed session 10. Jan 20 00:39:55.331517 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:60326.service - OpenSSH per-connection server daemon (10.0.0.1:60326). Jan 20 00:39:55.416763 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 60326 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:39:55.418805 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:39:55.431955 systemd-logind[1562]: New session 11 of user core. Jan 20 00:39:55.443947 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 00:39:55.711249 sshd[4091]: pam_unix(sshd:session): session closed for user core Jan 20 00:39:55.720351 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:60326.service: Deactivated successfully. Jan 20 00:39:55.727903 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 00:39:55.730131 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Jan 20 00:39:55.732459 systemd-logind[1562]: Removed session 11. Jan 20 00:40:00.735369 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:60330.service - OpenSSH per-connection server daemon (10.0.0.1:60330). Jan 20 00:40:00.815521 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 60330 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:00.819910 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:00.838605 systemd-logind[1562]: New session 12 of user core. Jan 20 00:40:00.851038 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 00:40:01.098317 sshd[4109]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:01.109835 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:60330.service: Deactivated successfully. Jan 20 00:40:01.115878 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 00:40:01.116340 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Jan 20 00:40:01.121635 systemd-logind[1562]: Removed session 12. Jan 20 00:40:06.116620 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:34138.service - OpenSSH per-connection server daemon (10.0.0.1:34138). Jan 20 00:40:06.265639 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 34138 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:06.273810 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:06.287691 systemd-logind[1562]: New session 13 of user core. Jan 20 00:40:06.293551 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 00:40:06.577816 sshd[4126]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:06.592483 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:34138.service: Deactivated successfully. Jan 20 00:40:06.601205 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 00:40:06.606548 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Jan 20 00:40:06.608495 systemd-logind[1562]: Removed session 13. 
Jan 20 00:40:11.600422 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:34152.service - OpenSSH per-connection server daemon (10.0.0.1:34152). Jan 20 00:40:11.710013 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 34152 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:11.712398 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:11.728283 systemd-logind[1562]: New session 14 of user core. Jan 20 00:40:11.753793 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 00:40:12.019850 sshd[4143]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:12.037348 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Jan 20 00:40:12.037704 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:34152.service: Deactivated successfully. Jan 20 00:40:12.047253 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 00:40:12.078310 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:34168.service - OpenSSH per-connection server daemon (10.0.0.1:34168). Jan 20 00:40:12.084674 systemd-logind[1562]: Removed session 14. Jan 20 00:40:12.143599 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 34168 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:12.152607 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:12.167572 systemd-logind[1562]: New session 15 of user core. Jan 20 00:40:12.185730 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 00:40:12.544730 sshd[4159]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:12.560149 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:47228.service - OpenSSH per-connection server daemon (10.0.0.1:47228). Jan 20 00:40:12.563145 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:34168.service: Deactivated successfully. Jan 20 00:40:12.581134 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Jan 20 00:40:12.586861 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 00:40:12.592266 systemd-logind[1562]: Removed session 15. Jan 20 00:40:12.624674 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 47228 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:12.630654 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:12.652763 systemd-logind[1562]: New session 16 of user core. Jan 20 00:40:12.660411 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 00:40:12.917514 sshd[4170]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:12.927127 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:47228.service: Deactivated successfully. Jan 20 00:40:12.941863 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 00:40:12.942406 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Jan 20 00:40:12.947181 systemd-logind[1562]: Removed session 16. Jan 20 00:40:17.301803 kubelet[2644]: E0120 00:40:17.301404 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:40:17.937478 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:47242.service - OpenSSH per-connection server daemon (10.0.0.1:47242). 
Jan 20 00:40:17.995727 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 47242 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:17.997476 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:18.016787 systemd-logind[1562]: New session 17 of user core. Jan 20 00:40:18.038717 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 00:40:18.268811 sshd[4190]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:18.276802 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:47242.service: Deactivated successfully. Jan 20 00:40:18.288008 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 00:40:18.288152 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Jan 20 00:40:18.292055 systemd-logind[1562]: Removed session 17. Jan 20 00:40:21.307403 kubelet[2644]: E0120 00:40:21.305319 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:40:23.287503 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:53764.service - OpenSSH per-connection server daemon (10.0.0.1:53764). Jan 20 00:40:23.303267 kubelet[2644]: E0120 00:40:23.300776 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:40:23.303267 kubelet[2644]: E0120 00:40:23.302900 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:40:23.419721 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 53764 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:23.427511 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:23.448733 systemd-logind[1562]: New session 18 of user core. Jan 20 00:40:23.467539 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 00:40:23.876167 sshd[4205]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:23.890724 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:53764.service: Deactivated successfully. Jan 20 00:40:23.916106 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 00:40:23.920053 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Jan 20 00:40:23.928519 systemd-logind[1562]: Removed session 18. Jan 20 00:40:28.898509 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:53770.service - OpenSSH per-connection server daemon (10.0.0.1:53770). Jan 20 00:40:28.992213 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 53770 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:28.998915 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:29.011569 systemd-logind[1562]: New session 19 of user core. Jan 20 00:40:29.021244 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 00:40:29.263065 sshd[4220]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:29.276944 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:53770.service: Deactivated successfully. Jan 20 00:40:29.287904 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 00:40:29.294740 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. 
Jan 20 00:40:29.297166 systemd-logind[1562]: Removed session 19. Jan 20 00:40:29.304782 kubelet[2644]: E0120 00:40:29.304750 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:40:30.304628 kubelet[2644]: E0120 00:40:30.302055 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:40:33.304225 kubelet[2644]: E0120 00:40:33.304128 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:40:34.281410 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:52732.service - OpenSSH per-connection server daemon (10.0.0.1:52732). Jan 20 00:40:34.390567 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 52732 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:34.394406 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:34.420836 systemd-logind[1562]: New session 20 of user core. Jan 20 00:40:34.430212 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 00:40:34.663826 sshd[4238]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:34.671133 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:52732.service: Deactivated successfully. Jan 20 00:40:34.677210 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 00:40:34.679197 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Jan 20 00:40:34.681068 systemd-logind[1562]: Removed session 20. Jan 20 00:40:37.311491 kubelet[2644]: E0120 00:40:37.303086 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:40:39.685064 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:52744.service - OpenSSH per-connection server daemon (10.0.0.1:52744). Jan 20 00:40:39.818435 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 52744 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:39.824834 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:39.839151 systemd-logind[1562]: New session 21 of user core. Jan 20 00:40:39.858016 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 00:40:40.225344 sshd[4254]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:40.246389 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:52744.service: Deactivated successfully. Jan 20 00:40:40.256396 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Jan 20 00:40:40.261833 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 00:40:40.270276 systemd-logind[1562]: Removed session 21. Jan 20 00:40:45.252127 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:60504.service - OpenSSH per-connection server daemon (10.0.0.1:60504). Jan 20 00:40:45.373916 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 60504 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:45.376154 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:45.388542 systemd-logind[1562]: New session 22 of user core. 
Jan 20 00:40:45.401073 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 00:40:45.697927 sshd[4269]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:45.713724 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:60504.service: Deactivated successfully. Jan 20 00:40:45.718358 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 00:40:45.720683 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit. Jan 20 00:40:45.724684 systemd-logind[1562]: Removed session 22. Jan 20 00:40:50.723723 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:60518.service - OpenSSH per-connection server daemon (10.0.0.1:60518). Jan 20 00:40:50.823633 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 60518 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:50.826797 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:50.848753 systemd-logind[1562]: New session 23 of user core. Jan 20 00:40:50.859921 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 00:40:51.193846 sshd[4285]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:51.210101 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:60518.service: Deactivated successfully. Jan 20 00:40:51.214346 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit. Jan 20 00:40:51.215444 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 00:40:51.225386 systemd-logind[1562]: Removed session 23. Jan 20 00:40:56.223199 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:59556.service - OpenSSH per-connection server daemon (10.0.0.1:59556). Jan 20 00:40:56.340672 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 59556 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:40:56.345803 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:56.362712 systemd-logind[1562]: New session 24 of user core. Jan 20 00:40:56.378691 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 00:40:56.633945 sshd[4302]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:56.653475 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:59556.service: Deactivated successfully. Jan 20 00:40:56.659384 systemd-logind[1562]: Session 24 logged out. Waiting for processes to exit. Jan 20 00:40:56.660610 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 00:40:56.663846 systemd-logind[1562]: Removed session 24. Jan 20 00:41:01.657584 systemd[1]: Started sshd@24-10.0.0.52:22-10.0.0.1:59558.service - OpenSSH per-connection server daemon (10.0.0.1:59558). Jan 20 00:41:01.759149 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 59558 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:01.762165 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:01.787615 systemd-logind[1562]: New session 25 of user core. Jan 20 00:41:01.796126 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 00:41:02.047323 sshd[4319]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:02.053890 systemd[1]: sshd@24-10.0.0.52:22-10.0.0.1:59558.service: Deactivated successfully. Jan 20 00:41:02.065935 systemd-logind[1562]: Session 25 logged out. Waiting for processes to exit. Jan 20 00:41:02.066131 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 00:41:02.082680 systemd-logind[1562]: Removed session 25. 
Jan 20 00:41:07.083224 systemd[1]: Started sshd@25-10.0.0.52:22-10.0.0.1:50868.service - OpenSSH per-connection server daemon (10.0.0.1:50868). Jan 20 00:41:07.217870 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 50868 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:07.222221 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:07.256802 systemd-logind[1562]: New session 26 of user core. Jan 20 00:41:07.273471 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 00:41:07.620284 sshd[4335]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:07.636738 systemd[1]: sshd@25-10.0.0.52:22-10.0.0.1:50868.service: Deactivated successfully. Jan 20 00:41:07.640589 systemd-logind[1562]: Session 26 logged out. Waiting for processes to exit. Jan 20 00:41:07.641600 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 00:41:07.659105 systemd-logind[1562]: Removed session 26. Jan 20 00:41:12.646433 systemd[1]: Started sshd@26-10.0.0.52:22-10.0.0.1:35052.service - OpenSSH per-connection server daemon (10.0.0.1:35052). Jan 20 00:41:12.731845 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 35052 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:12.735156 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:12.750817 systemd-logind[1562]: New session 27 of user core. Jan 20 00:41:12.759473 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 00:41:13.020126 sshd[4350]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:13.028638 systemd[1]: sshd@26-10.0.0.52:22-10.0.0.1:35052.service: Deactivated successfully. Jan 20 00:41:13.040464 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 00:41:13.042904 systemd-logind[1562]: Session 27 logged out. Waiting for processes to exit. Jan 20 00:41:13.053342 systemd-logind[1562]: Removed session 27. Jan 20 00:41:18.051246 systemd[1]: Started sshd@27-10.0.0.52:22-10.0.0.1:35058.service - OpenSSH per-connection server daemon (10.0.0.1:35058). Jan 20 00:41:18.102687 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 35058 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:18.105445 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:18.114600 systemd-logind[1562]: New session 28 of user core. Jan 20 00:41:18.133768 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 00:41:18.422198 sshd[4367]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:18.428272 systemd[1]: sshd@27-10.0.0.52:22-10.0.0.1:35058.service: Deactivated successfully. Jan 20 00:41:18.439333 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 00:41:18.442076 systemd-logind[1562]: Session 28 logged out. Waiting for processes to exit. Jan 20 00:41:18.449677 systemd-logind[1562]: Removed session 28. Jan 20 00:41:23.453435 systemd[1]: Started sshd@28-10.0.0.52:22-10.0.0.1:49278.service - OpenSSH per-connection server daemon (10.0.0.1:49278). Jan 20 00:41:23.532076 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 49278 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:23.537730 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:23.561701 systemd-logind[1562]: New session 29 of user core. 
Jan 20 00:41:23.568943 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 00:41:23.832867 sshd[4383]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:23.841776 systemd[1]: Started sshd@29-10.0.0.52:22-10.0.0.1:49292.service - OpenSSH per-connection server daemon (10.0.0.1:49292). Jan 20 00:41:23.843380 systemd[1]: sshd@28-10.0.0.52:22-10.0.0.1:49278.service: Deactivated successfully. Jan 20 00:41:23.860628 systemd-logind[1562]: Session 29 logged out. Waiting for processes to exit. Jan 20 00:41:23.862643 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 00:41:23.866249 systemd-logind[1562]: Removed session 29. Jan 20 00:41:23.925279 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 49292 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:23.927402 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:23.941940 systemd-logind[1562]: New session 30 of user core. Jan 20 00:41:23.954003 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 00:41:24.709405 sshd[4396]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:24.731411 systemd[1]: Started sshd@30-10.0.0.52:22-10.0.0.1:49300.service - OpenSSH per-connection server daemon (10.0.0.1:49300). Jan 20 00:41:24.734336 systemd[1]: sshd@29-10.0.0.52:22-10.0.0.1:49292.service: Deactivated successfully. Jan 20 00:41:24.742467 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 00:41:24.751271 systemd-logind[1562]: Session 30 logged out. Waiting for processes to exit. Jan 20 00:41:24.759480 systemd-logind[1562]: Removed session 30. Jan 20 00:41:24.828644 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 49300 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:24.831732 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:24.858029 systemd-logind[1562]: New session 31 of user core. Jan 20 00:41:24.864534 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 20 00:41:26.377101 sshd[4411]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:26.400876 systemd[1]: Started sshd@31-10.0.0.52:22-10.0.0.1:49316.service - OpenSSH per-connection server daemon (10.0.0.1:49316). Jan 20 00:41:26.403215 systemd[1]: sshd@30-10.0.0.52:22-10.0.0.1:49300.service: Deactivated successfully. Jan 20 00:41:26.409220 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 00:41:26.417082 systemd-logind[1562]: Session 31 logged out. Waiting for processes to exit. Jan 20 00:41:26.429284 systemd-logind[1562]: Removed session 31. Jan 20 00:41:26.507722 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 49316 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:26.510330 sshd[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:26.534344 systemd-logind[1562]: New session 32 of user core. Jan 20 00:41:26.548227 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 20 00:41:27.419268 sshd[4432]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:27.438380 systemd[1]: Started sshd@32-10.0.0.52:22-10.0.0.1:49324.service - OpenSSH per-connection server daemon (10.0.0.1:49324). Jan 20 00:41:27.448306 systemd[1]: sshd@31-10.0.0.52:22-10.0.0.1:49316.service: Deactivated successfully. Jan 20 00:41:27.458324 systemd-logind[1562]: Session 32 logged out. Waiting for processes to exit. 
Jan 20 00:41:27.464372 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 00:41:27.475908 systemd-logind[1562]: Removed session 32. Jan 20 00:41:27.546681 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 49324 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:27.550702 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:27.565116 systemd-logind[1562]: New session 33 of user core. Jan 20 00:41:27.576619 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 00:41:27.854743 sshd[4445]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:27.865877 systemd[1]: sshd@32-10.0.0.52:22-10.0.0.1:49324.service: Deactivated successfully. Jan 20 00:41:27.873231 systemd-logind[1562]: Session 33 logged out. Waiting for processes to exit. Jan 20 00:41:27.874821 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 00:41:27.875880 systemd-logind[1562]: Removed session 33. Jan 20 00:41:32.303061 kubelet[2644]: E0120 00:41:32.300818 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:32.891379 systemd[1]: Started sshd@33-10.0.0.52:22-10.0.0.1:39254.service - OpenSSH per-connection server daemon (10.0.0.1:39254). Jan 20 00:41:32.986033 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 39254 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:32.990093 sshd[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:33.002772 systemd-logind[1562]: New session 34 of user core. Jan 20 00:41:33.014728 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 20 00:41:33.264534 sshd[4465]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:33.270885 systemd[1]: sshd@33-10.0.0.52:22-10.0.0.1:39254.service: Deactivated successfully. Jan 20 00:41:33.282045 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 00:41:33.283426 systemd-logind[1562]: Session 34 logged out. Waiting for processes to exit. Jan 20 00:41:33.289276 systemd-logind[1562]: Removed session 34. Jan 20 00:41:33.302034 kubelet[2644]: E0120 00:41:33.301809 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:35.307472 kubelet[2644]: E0120 00:41:35.307371 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:38.289505 systemd[1]: Started sshd@34-10.0.0.52:22-10.0.0.1:39264.service - OpenSSH per-connection server daemon (10.0.0.1:39264). Jan 20 00:41:38.369492 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 39264 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:38.376140 sshd[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:38.390906 systemd-logind[1562]: New session 35 of user core. Jan 20 00:41:38.406050 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 20 00:41:38.622765 sshd[4480]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:38.632493 systemd[1]: sshd@34-10.0.0.52:22-10.0.0.1:39264.service: Deactivated successfully. Jan 20 00:41:38.639833 systemd-logind[1562]: Session 35 logged out. 
Waiting for processes to exit. Jan 20 00:41:38.642452 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 00:41:38.644360 systemd-logind[1562]: Removed session 35. Jan 20 00:41:39.311052 kubelet[2644]: E0120 00:41:39.309503 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:43.643814 systemd[1]: Started sshd@35-10.0.0.52:22-10.0.0.1:60590.service - OpenSSH per-connection server daemon (10.0.0.1:60590). Jan 20 00:41:43.700577 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 60590 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:43.705128 sshd[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:43.718133 systemd-logind[1562]: New session 36 of user core. Jan 20 00:41:43.725632 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 20 00:41:43.977539 sshd[4497]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:43.992329 systemd[1]: sshd@35-10.0.0.52:22-10.0.0.1:60590.service: Deactivated successfully. Jan 20 00:41:44.004188 systemd-logind[1562]: Session 36 logged out. Waiting for processes to exit. Jan 20 00:41:44.006828 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 00:41:44.013787 systemd-logind[1562]: Removed session 36. Jan 20 00:41:45.307756 kubelet[2644]: E0120 00:41:45.305171 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:48.991435 systemd[1]: Started sshd@36-10.0.0.52:22-10.0.0.1:60594.service - OpenSSH per-connection server daemon (10.0.0.1:60594). Jan 20 00:41:49.060438 sshd[4512]: Accepted publickey for core from 10.0.0.1 port 60594 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:49.068430 sshd[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:49.083150 systemd-logind[1562]: New session 37 of user core. Jan 20 00:41:49.101578 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 20 00:41:49.304885 kubelet[2644]: E0120 00:41:49.302419 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:49.345859 sshd[4512]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:49.356836 systemd[1]: sshd@36-10.0.0.52:22-10.0.0.1:60594.service: Deactivated successfully. Jan 20 00:41:49.360411 systemd-logind[1562]: Session 37 logged out. Waiting for processes to exit. Jan 20 00:41:49.360509 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 00:41:49.363101 systemd-logind[1562]: Removed session 37. Jan 20 00:41:54.366455 systemd[1]: Started sshd@37-10.0.0.52:22-10.0.0.1:43218.service - OpenSSH per-connection server daemon (10.0.0.1:43218). Jan 20 00:41:54.442850 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 43218 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:54.445124 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:54.455876 systemd-logind[1562]: New session 38 of user core. Jan 20 00:41:54.459526 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 20 00:41:54.671067 sshd[4529]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:54.690443 systemd[1]: Started sshd@38-10.0.0.52:22-10.0.0.1:43220.service - OpenSSH per-connection server daemon (10.0.0.1:43220). Jan 20 00:41:54.693074 systemd[1]: sshd@37-10.0.0.52:22-10.0.0.1:43218.service: Deactivated successfully. Jan 20 00:41:54.711445 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 00:41:54.720882 systemd-logind[1562]: Session 38 logged out. Waiting for processes to exit. Jan 20 00:41:54.725560 systemd-logind[1562]: Removed session 38. Jan 20 00:41:54.783004 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 43220 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:41:54.785780 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:41:54.797237 systemd-logind[1562]: New session 39 of user core. Jan 20 00:41:54.805667 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 20 00:41:56.302139 kubelet[2644]: E0120 00:41:56.301397 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:56.534611 containerd[1583]: time="2026-01-20T00:41:56.532719930Z" level=info msg="StopContainer for \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\" with timeout 30 (s)" Jan 20 00:41:56.535357 containerd[1583]: time="2026-01-20T00:41:56.535048847Z" level=info msg="Stop container \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\" with signal terminated" Jan 20 00:41:56.600672 containerd[1583]: time="2026-01-20T00:41:56.600487079Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:41:56.609303 containerd[1583]: time="2026-01-20T00:41:56.608332280Z" level=info msg="StopContainer for \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\" with timeout 2 (s)" Jan 20 00:41:56.609303 containerd[1583]: time="2026-01-20T00:41:56.608845267Z" level=info msg="Stop container \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\" with signal terminated" Jan 20 00:41:56.648054 systemd-networkd[1248]: lxc_health: Link DOWN Jan 20 00:41:56.648066 systemd-networkd[1248]: lxc_health: Lost carrier Jan 20 00:41:56.681379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60-rootfs.mount: Deactivated successfully. Jan 20 00:41:56.729455 containerd[1583]: time="2026-01-20T00:41:56.729382349Z" level=info msg="shim disconnected" id=7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60 namespace=k8s.io Jan 20 00:41:56.730114 containerd[1583]: time="2026-01-20T00:41:56.729754153Z" level=warning msg="cleaning up after shim disconnected" id=7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60 namespace=k8s.io Jan 20 00:41:56.730114 containerd[1583]: time="2026-01-20T00:41:56.729780291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:41:56.782806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4-rootfs.mount: Deactivated successfully. 
Jan 20 00:41:56.811587 containerd[1583]: time="2026-01-20T00:41:56.811493772Z" level=info msg="StopContainer for \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\" returns successfully" Jan 20 00:41:56.820356 containerd[1583]: time="2026-01-20T00:41:56.820287213Z" level=info msg="shim disconnected" id=50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4 namespace=k8s.io Jan 20 00:41:56.820698 containerd[1583]: time="2026-01-20T00:41:56.820535115Z" level=warning msg="cleaning up after shim disconnected" id=50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4 namespace=k8s.io Jan 20 00:41:56.820698 containerd[1583]: time="2026-01-20T00:41:56.820561465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:41:56.827419 containerd[1583]: time="2026-01-20T00:41:56.827374250Z" level=info msg="StopPodSandbox for \"3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4\"" Jan 20 00:41:56.827723 containerd[1583]: time="2026-01-20T00:41:56.827572620Z" level=info msg="Container to stop \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:41:56.836480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4-shm.mount: Deactivated successfully. Jan 20 00:41:56.898890 containerd[1583]: time="2026-01-20T00:41:56.898674151Z" level=info msg="StopContainer for \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\" returns successfully" Jan 20 00:41:56.901200 containerd[1583]: time="2026-01-20T00:41:56.901165642Z" level=info msg="StopPodSandbox for \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\"" Jan 20 00:41:56.901356 containerd[1583]: time="2026-01-20T00:41:56.901331070Z" level=info msg="Container to stop \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:41:56.908038 containerd[1583]: time="2026-01-20T00:41:56.901438029Z" level=info msg="Container to stop \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:41:56.908038 containerd[1583]: time="2026-01-20T00:41:56.901462315Z" level=info msg="Container to stop \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:41:56.908038 containerd[1583]: time="2026-01-20T00:41:56.901480629Z" level=info msg="Container to stop \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:41:56.908038 containerd[1583]: time="2026-01-20T00:41:56.901495957Z" level=info msg="Container to stop \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 00:41:57.058231 containerd[1583]: time="2026-01-20T00:41:57.058117843Z" level=info msg="shim disconnected" id=3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4 namespace=k8s.io Jan 20 00:41:57.058231 containerd[1583]: time="2026-01-20T00:41:57.058219132Z" level=warning msg="cleaning up after shim disconnected" id=3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4 namespace=k8s.io Jan 20 00:41:57.058231 containerd[1583]: time="2026-01-20T00:41:57.058236915Z" 
level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:41:57.080713 containerd[1583]: time="2026-01-20T00:41:57.079619356Z" level=info msg="shim disconnected" id=8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802 namespace=k8s.io Jan 20 00:41:57.080713 containerd[1583]: time="2026-01-20T00:41:57.079743257Z" level=warning msg="cleaning up after shim disconnected" id=8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802 namespace=k8s.io Jan 20 00:41:57.080713 containerd[1583]: time="2026-01-20T00:41:57.079758305Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:41:57.093801 containerd[1583]: time="2026-01-20T00:41:57.091839154Z" level=info msg="TearDown network for sandbox \"3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4\" successfully" Jan 20 00:41:57.093801 containerd[1583]: time="2026-01-20T00:41:57.091881453Z" level=info msg="StopPodSandbox for \"3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4\" returns successfully" Jan 20 00:41:57.131070 containerd[1583]: time="2026-01-20T00:41:57.130861016Z" level=info msg="TearDown network for sandbox \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" successfully" Jan 20 00:41:57.131070 containerd[1583]: time="2026-01-20T00:41:57.130901542Z" level=info msg="StopPodSandbox for \"8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802\" returns successfully" Jan 20 00:41:57.202750 kubelet[2644]: I0120 00:41:57.202389 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-bpf-maps\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.202750 kubelet[2644]: I0120 00:41:57.202462 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-config-path\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.202750 kubelet[2644]: I0120 00:41:57.202499 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec35af2d-b068-4cdc-a65b-12929172d504-cilium-config-path\") pod \"ec35af2d-b068-4cdc-a65b-12929172d504\" (UID: \"ec35af2d-b068-4cdc-a65b-12929172d504\") " Jan 20 00:41:57.202750 kubelet[2644]: I0120 00:41:57.202534 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-hubble-tls\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.202750 kubelet[2644]: I0120 00:41:57.202562 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cni-path\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.202750 kubelet[2644]: I0120 00:41:57.202593 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-run\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.203295 kubelet[2644]: 
I0120 00:41:57.202625 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-etc-cni-netd\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.203295 kubelet[2644]: I0120 00:41:57.202681 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-lib-modules\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.204035 kubelet[2644]: I0120 00:41:57.202707 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-host-proc-sys-kernel\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.204035 kubelet[2644]: I0120 00:41:57.203437 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-cgroup\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.204035 kubelet[2644]: I0120 00:41:57.203465 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-hostproc\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.204035 kubelet[2644]: I0120 00:41:57.203493 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-xtables-lock\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.204035 kubelet[2644]: I0120 00:41:57.203519 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-host-proc-sys-net\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.204035 kubelet[2644]: I0120 00:41:57.203552 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-clustermesh-secrets\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.204308 kubelet[2644]: I0120 00:41:57.203584 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gspl9\" (UniqueName: \"kubernetes.io/projected/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-kube-api-access-gspl9\") pod \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\" (UID: \"e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a\") " Jan 20 00:41:57.204308 kubelet[2644]: I0120 00:41:57.203615 2644 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2487t\" (UniqueName: \"kubernetes.io/projected/ec35af2d-b068-4cdc-a65b-12929172d504-kube-api-access-2487t\") pod \"ec35af2d-b068-4cdc-a65b-12929172d504\" (UID: \"ec35af2d-b068-4cdc-a65b-12929172d504\") " Jan 20 00:41:57.204530 kubelet[2644]: I0120 
00:41:57.204446 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.205070 kubelet[2644]: I0120 00:41:57.204761 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.205070 kubelet[2644]: I0120 00:41:57.204848 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-hostproc" (OuterVolumeSpecName: "hostproc") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.205070 kubelet[2644]: I0120 00:41:57.204879 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.207911 kubelet[2644]: I0120 00:41:57.207861 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cni-path" (OuterVolumeSpecName: "cni-path") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.208048 kubelet[2644]: I0120 00:41:57.207921 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.208048 kubelet[2644]: I0120 00:41:57.207954 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.208048 kubelet[2644]: I0120 00:41:57.208033 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.208190 kubelet[2644]: I0120 00:41:57.208061 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.208190 kubelet[2644]: I0120 00:41:57.208091 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 00:41:57.219336 kubelet[2644]: I0120 00:41:57.219249 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:41:57.228577 kubelet[2644]: I0120 00:41:57.228382 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec35af2d-b068-4cdc-a65b-12929172d504-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ec35af2d-b068-4cdc-a65b-12929172d504" (UID: "ec35af2d-b068-4cdc-a65b-12929172d504"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:41:57.231742 kubelet[2644]: I0120 00:41:57.231616 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec35af2d-b068-4cdc-a65b-12929172d504-kube-api-access-2487t" (OuterVolumeSpecName: "kube-api-access-2487t") pod "ec35af2d-b068-4cdc-a65b-12929172d504" (UID: "ec35af2d-b068-4cdc-a65b-12929172d504"). InnerVolumeSpecName "kube-api-access-2487t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:41:57.238262 kubelet[2644]: I0120 00:41:57.238162 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:41:57.238846 kubelet[2644]: I0120 00:41:57.238748 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-kube-api-access-gspl9" (OuterVolumeSpecName: "kube-api-access-gspl9") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "kube-api-access-gspl9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:41:57.239621 kubelet[2644]: I0120 00:41:57.239280 2644 scope.go:117] "RemoveContainer" containerID="50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4" Jan 20 00:41:57.242137 kubelet[2644]: I0120 00:41:57.241701 2644 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" (UID: "e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 00:41:57.242377 containerd[1583]: time="2026-01-20T00:41:57.242206644Z" level=info msg="RemoveContainer for \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\"" Jan 20 00:41:57.281824 containerd[1583]: time="2026-01-20T00:41:57.281701309Z" level=info msg="RemoveContainer for \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\" returns successfully" Jan 20 00:41:57.283870 kubelet[2644]: I0120 00:41:57.283750 2644 scope.go:117] "RemoveContainer" containerID="0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d" Jan 20 00:41:57.285460 containerd[1583]: time="2026-01-20T00:41:57.285432534Z" level=info msg="RemoveContainer for \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\"" Jan 20 00:41:57.297012 containerd[1583]: time="2026-01-20T00:41:57.295148931Z" level=info msg="RemoveContainer for \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\" returns successfully" Jan 20 00:41:57.297154 kubelet[2644]: I0120 00:41:57.295476 2644 scope.go:117] "RemoveContainer" containerID="38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d" Jan 20 00:41:57.299871 containerd[1583]: time="2026-01-20T00:41:57.299787308Z" level=info msg="RemoveContainer for \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\"" Jan 20 00:41:57.306229 kubelet[2644]: I0120 00:41:57.306038 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.306229 kubelet[2644]: I0120 00:41:57.306137 2644 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.306229 kubelet[2644]: I0120 00:41:57.306152 2644 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308356 kubelet[2644]: I0120 00:41:57.306169 2644 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gspl9\" (UniqueName: \"kubernetes.io/projected/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-kube-api-access-gspl9\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308356 kubelet[2644]: I0120 00:41:57.306461 2644 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308356 kubelet[2644]: I0120 00:41:57.306474 2644 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308356 kubelet[2644]: I0120 00:41:57.306487 2644 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2487t\" (UniqueName: \"kubernetes.io/projected/ec35af2d-b068-4cdc-a65b-12929172d504-kube-api-access-2487t\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308356 kubelet[2644]: I0120 00:41:57.307278 2644 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308356 kubelet[2644]: I0120 00:41:57.307299 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308356 kubelet[2644]: I0120 00:41:57.307484 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec35af2d-b068-4cdc-a65b-12929172d504-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308356 kubelet[2644]: I0120 00:41:57.307500 2644 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308608 kubelet[2644]: I0120 00:41:57.307511 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308608 kubelet[2644]: I0120 00:41:57.307522 2644 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308608 kubelet[2644]: I0120 00:41:57.307724 2644 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308608 kubelet[2644]: I0120 00:41:57.307736 2644 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.308608 kubelet[2644]: I0120 00:41:57.307748 2644 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 20 00:41:57.333587 containerd[1583]: time="2026-01-20T00:41:57.332542263Z" level=info msg="RemoveContainer for \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\" returns successfully" Jan 20 00:41:57.336081 kubelet[2644]: I0120 00:41:57.334869 2644 scope.go:117] "RemoveContainer" containerID="d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4" Jan 20 00:41:57.341272 containerd[1583]: time="2026-01-20T00:41:57.340849890Z" level=info msg="RemoveContainer for \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\"" Jan 20 00:41:57.357359 containerd[1583]: time="2026-01-20T00:41:57.357104640Z" level=info msg="RemoveContainer for 
\"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\" returns successfully" Jan 20 00:41:57.357484 kubelet[2644]: I0120 00:41:57.357385 2644 scope.go:117] "RemoveContainer" containerID="98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458" Jan 20 00:41:57.360264 containerd[1583]: time="2026-01-20T00:41:57.360157687Z" level=info msg="RemoveContainer for \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\"" Jan 20 00:41:57.367335 containerd[1583]: time="2026-01-20T00:41:57.367238253Z" level=info msg="RemoveContainer for \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\" returns successfully" Jan 20 00:41:57.367674 kubelet[2644]: I0120 00:41:57.367539 2644 scope.go:117] "RemoveContainer" containerID="50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4" Jan 20 00:41:57.368132 containerd[1583]: time="2026-01-20T00:41:57.368087820Z" level=error msg="ContainerStatus for \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\": not found" Jan 20 00:41:57.404016 kubelet[2644]: E0120 00:41:57.403893 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\": not found" containerID="50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4" Jan 20 00:41:57.404192 kubelet[2644]: I0120 00:41:57.404066 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4"} err="failed to get container status \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"50f7f5bb2ce0535a03ce408b3d456653de4d48a007c60dc8549788c3647d28d4\": not found" Jan 20 00:41:57.404254 kubelet[2644]: I0120 00:41:57.404191 2644 scope.go:117] "RemoveContainer" containerID="0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d" Jan 20 00:41:57.404754 containerd[1583]: time="2026-01-20T00:41:57.404608354Z" level=error msg="ContainerStatus for \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\": not found" Jan 20 00:41:57.406682 kubelet[2644]: E0120 00:41:57.404857 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\": not found" containerID="0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d" Jan 20 00:41:57.406682 kubelet[2644]: I0120 00:41:57.404924 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d"} err="failed to get container status \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ce3ef5d214b950bae8cbdbfaa66d6c6fe85dcfa474af6440987e05120a0642d\": not found" Jan 20 00:41:57.406682 kubelet[2644]: I0120 00:41:57.404953 2644 scope.go:117] 
"RemoveContainer" containerID="38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d" Jan 20 00:41:57.406682 kubelet[2644]: E0120 00:41:57.405413 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\": not found" containerID="38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d" Jan 20 00:41:57.406682 kubelet[2644]: I0120 00:41:57.405443 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d"} err="failed to get container status \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\": rpc error: code = NotFound desc = an error occurred when try to find container \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\": not found" Jan 20 00:41:57.406682 kubelet[2644]: I0120 00:41:57.405468 2644 scope.go:117] "RemoveContainer" containerID="d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4" Jan 20 00:41:57.407019 containerd[1583]: time="2026-01-20T00:41:57.405232399Z" level=error msg="ContainerStatus for \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38ef162278458659b3ee8384ab719754e0d0407e12e7df14a5f7c5e478a02e2d\": not found" Jan 20 00:41:57.407019 containerd[1583]: time="2026-01-20T00:41:57.405750757Z" level=error msg="ContainerStatus for \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\": not found" Jan 20 00:41:57.407019 containerd[1583]: time="2026-01-20T00:41:57.406396682Z" level=error msg="ContainerStatus for \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\": not found" Jan 20 00:41:57.407142 kubelet[2644]: E0120 00:41:57.405940 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\": not found" containerID="d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4" Jan 20 00:41:57.407142 kubelet[2644]: I0120 00:41:57.406032 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4"} err="failed to get container status \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d15df20f7c1a65e1d60de274bbe46908f18d2038452f9703af3a7a70f762eff4\": not found" Jan 20 00:41:57.407142 kubelet[2644]: I0120 00:41:57.406057 2644 scope.go:117] "RemoveContainer" containerID="98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458" Jan 20 00:41:57.407142 kubelet[2644]: E0120 00:41:57.406558 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\": not found" 
containerID="98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458" Jan 20 00:41:57.407142 kubelet[2644]: I0120 00:41:57.406581 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458"} err="failed to get container status \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\": rpc error: code = NotFound desc = an error occurred when try to find container \"98e0e09fec61497fff173f07157de920600b4dc617b7e7cd37976fccc85a1458\": not found" Jan 20 00:41:57.407142 kubelet[2644]: I0120 00:41:57.406600 2644 scope.go:117] "RemoveContainer" containerID="7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60" Jan 20 00:41:57.408946 containerd[1583]: time="2026-01-20T00:41:57.408864067Z" level=info msg="RemoveContainer for \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\"" Jan 20 00:41:57.419953 containerd[1583]: time="2026-01-20T00:41:57.419842053Z" level=info msg="RemoveContainer for \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\" returns successfully" Jan 20 00:41:57.420391 kubelet[2644]: I0120 00:41:57.420235 2644 scope.go:117] "RemoveContainer" containerID="7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60" Jan 20 00:41:57.421707 containerd[1583]: time="2026-01-20T00:41:57.420558304Z" level=error msg="ContainerStatus for \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\": not found" Jan 20 00:41:57.422162 kubelet[2644]: E0120 00:41:57.421924 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\": not found" containerID="7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60" Jan 20 00:41:57.422162 kubelet[2644]: I0120 00:41:57.422046 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60"} err="failed to get container status \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e6b21f4af8455c4c9d634eacd57a9b6ddba7e0a82e3c4ff6fa40d58e13fce60\": not found" Jan 20 00:41:57.536706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f37e9bb75bc1b90e8592b89bdd9ead91424d618795db3acf4640cc92f239cf4-rootfs.mount: Deactivated successfully. Jan 20 00:41:57.536959 systemd[1]: var-lib-kubelet-pods-ec35af2d\x2db068\x2d4cdc\x2da65b\x2d12929172d504-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2487t.mount: Deactivated successfully. Jan 20 00:41:57.537192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802-rootfs.mount: Deactivated successfully. Jan 20 00:41:57.537369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e850e03ece82484886150ffba51646291156762b9f7253662b08deb0c66a802-shm.mount: Deactivated successfully. Jan 20 00:41:57.537556 systemd[1]: var-lib-kubelet-pods-e3f7ed6a\x2dcdf2\x2d4a81\x2d8ce0\x2d8c66a822498a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgspl9.mount: Deactivated successfully. 
Jan 20 00:41:57.537774 systemd[1]: var-lib-kubelet-pods-e3f7ed6a\x2dcdf2\x2d4a81\x2d8ce0\x2d8c66a822498a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 20 00:41:57.537948 systemd[1]: var-lib-kubelet-pods-e3f7ed6a\x2dcdf2\x2d4a81\x2d8ce0\x2d8c66a822498a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 20 00:41:58.352306 sshd[4541]: pam_unix(sshd:session): session closed for user core
Jan 20 00:41:58.378921 systemd[1]: Started sshd@39-10.0.0.52:22-10.0.0.1:43226.service - OpenSSH per-connection server daemon (10.0.0.1:43226).
Jan 20 00:41:58.382281 systemd[1]: sshd@38-10.0.0.52:22-10.0.0.1:43220.service: Deactivated successfully.
Jan 20 00:41:58.393612 systemd-logind[1562]: Session 39 logged out. Waiting for processes to exit.
Jan 20 00:41:58.397187 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 00:41:58.407166 systemd-logind[1562]: Removed session 39.
Jan 20 00:41:58.469901 kubelet[2644]: E0120 00:41:58.469752 2644 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 00:41:58.473327 sshd[4711]: Accepted publickey for core from 10.0.0.1 port 43226 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:41:58.478448 sshd[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:41:58.493824 systemd-logind[1562]: New session 40 of user core.
Jan 20 00:41:58.515875 systemd[1]: Started session-40.scope - Session 40 of User core.
Jan 20 00:41:59.306703 kubelet[2644]: I0120 00:41:59.306566 2644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" path="/var/lib/kubelet/pods/e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a/volumes"
Jan 20 00:41:59.310274 kubelet[2644]: I0120 00:41:59.310189 2644 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec35af2d-b068-4cdc-a65b-12929172d504" path="/var/lib/kubelet/pods/ec35af2d-b068-4cdc-a65b-12929172d504/volumes"
Jan 20 00:41:59.819340 sshd[4711]: pam_unix(sshd:session): session closed for user core
Jan 20 00:41:59.846345 systemd[1]: Started sshd@40-10.0.0.52:22-10.0.0.1:43242.service - OpenSSH per-connection server daemon (10.0.0.1:43242).
Jan 20 00:41:59.847256 systemd[1]: sshd@39-10.0.0.52:22-10.0.0.1:43226.service: Deactivated successfully.
Jan 20 00:41:59.866397 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 00:41:59.885141 systemd-logind[1562]: Session 40 logged out. Waiting for processes to exit.
Jan 20 00:41:59.891396 systemd-logind[1562]: Removed session 40.
Jan 20 00:41:59.905768 kubelet[2644]: I0120 00:41:59.905181 2644 memory_manager.go:355] "RemoveStaleState removing state" podUID="ec35af2d-b068-4cdc-a65b-12929172d504" containerName="cilium-operator"
Jan 20 00:41:59.905768 kubelet[2644]: I0120 00:41:59.905223 2644 memory_manager.go:355] "RemoveStaleState removing state" podUID="e3f7ed6a-cdf2-4a81-8ce0-8c66a822498a" containerName="cilium-agent"
Jan 20 00:41:59.931851 sshd[4726]: Accepted publickey for core from 10.0.0.1 port 43242 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:41:59.934258 sshd[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:41:59.943344 kubelet[2644]: I0120 00:41:59.942435 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-cilium-cgroup\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943344 kubelet[2644]: I0120 00:41:59.942558 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-lib-modules\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943344 kubelet[2644]: I0120 00:41:59.942597 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-host-proc-sys-net\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943344 kubelet[2644]: I0120 00:41:59.942628 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-host-proc-sys-kernel\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943344 kubelet[2644]: I0120 00:41:59.942709 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-xtables-lock\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943344 kubelet[2644]: I0120 00:41:59.942744 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-cni-path\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943847 kubelet[2644]: I0120 00:41:59.942776 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mz6d\" (UniqueName: \"kubernetes.io/projected/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-kube-api-access-7mz6d\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943847 kubelet[2644]: I0120 00:41:59.942805 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-cilium-run\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943847 kubelet[2644]: I0120 00:41:59.942836 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-etc-cni-netd\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943847 kubelet[2644]: I0120 00:41:59.943053 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-clustermesh-secrets\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.943847 kubelet[2644]: I0120 00:41:59.943089 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-cilium-ipsec-secrets\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.944771 kubelet[2644]: I0120 00:41:59.943119 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-cilium-config-path\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.944771 kubelet[2644]: I0120 00:41:59.943147 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-hubble-tls\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.944771 kubelet[2644]: I0120 00:41:59.943173 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-bpf-maps\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.944771 kubelet[2644]: I0120 00:41:59.943274 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1-hostproc\") pod \"cilium-mzhrc\" (UID: \"6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1\") " pod="kube-system/cilium-mzhrc"
Jan 20 00:41:59.955066 systemd-logind[1562]: New session 41 of user core.
Jan 20 00:41:59.975253 systemd[1]: Started session-41.scope - Session 41 of User core.
Jan 20 00:42:00.075047 sshd[4726]: pam_unix(sshd:session): session closed for user core
Jan 20 00:42:00.100481 systemd[1]: Started sshd@41-10.0.0.52:22-10.0.0.1:43252.service - OpenSSH per-connection server daemon (10.0.0.1:43252).
Jan 20 00:42:00.101936 systemd[1]: sshd@40-10.0.0.52:22-10.0.0.1:43242.service: Deactivated successfully.
Jan 20 00:42:00.110923 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 00:42:00.117942 systemd-logind[1562]: Session 41 logged out. Waiting for processes to exit.
Jan 20 00:42:00.123112 systemd-logind[1562]: Removed session 41.
Jan 20 00:42:00.176798 sshd[4740]: Accepted publickey for core from 10.0.0.1 port 43252 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:42:00.183021 sshd[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:42:00.198453 systemd-logind[1562]: New session 42 of user core.
Jan 20 00:42:00.208939 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 20 00:42:00.242271 kubelet[2644]: E0120 00:42:00.241780 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:00.244797 containerd[1583]: time="2026-01-20T00:42:00.243588978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzhrc,Uid:6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1,Namespace:kube-system,Attempt:0,}"
Jan 20 00:42:00.340932 containerd[1583]: time="2026-01-20T00:42:00.337284954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 00:42:00.340932 containerd[1583]: time="2026-01-20T00:42:00.337503341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 00:42:00.340932 containerd[1583]: time="2026-01-20T00:42:00.337518981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:42:00.340932 containerd[1583]: time="2026-01-20T00:42:00.338333486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:42:00.470446 containerd[1583]: time="2026-01-20T00:42:00.470397777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzhrc,Uid:6ec0447c-c7f3-4301-b0dd-ee10dd2c5eb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\""
Jan 20 00:42:00.475294 kubelet[2644]: E0120 00:42:00.475144 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:00.483801 containerd[1583]: time="2026-01-20T00:42:00.482516066Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 20 00:42:00.545740 containerd[1583]: time="2026-01-20T00:42:00.545598620Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"570599c9e6817f96e7f14df858545e0fd842d467691e8f2791bc33e67cd33021\""
Jan 20 00:42:00.547148 containerd[1583]: time="2026-01-20T00:42:00.547048224Z" level=info msg="StartContainer for \"570599c9e6817f96e7f14df858545e0fd842d467691e8f2791bc33e67cd33021\""
Jan 20 00:42:00.696785 containerd[1583]: time="2026-01-20T00:42:00.695393114Z" level=info msg="StartContainer for \"570599c9e6817f96e7f14df858545e0fd842d467691e8f2791bc33e67cd33021\" returns successfully"
Jan 20 00:42:00.822951 containerd[1583]: time="2026-01-20T00:42:00.822120367Z" level=info msg="shim disconnected" id=570599c9e6817f96e7f14df858545e0fd842d467691e8f2791bc33e67cd33021 namespace=k8s.io
Jan 20 00:42:00.822951 containerd[1583]: time="2026-01-20T00:42:00.822182222Z" level=warning msg="cleaning up after shim disconnected" id=570599c9e6817f96e7f14df858545e0fd842d467691e8f2791bc33e67cd33021 namespace=k8s.io
Jan 20 00:42:00.822951 containerd[1583]: time="2026-01-20T00:42:00.822196759Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:42:00.881702 containerd[1583]: time="2026-01-20T00:42:00.881528131Z" level=warning msg="cleanup warnings time=\"2026-01-20T00:42:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 20 00:42:01.285910 kubelet[2644]: E0120 00:42:01.285362 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:01.292208 containerd[1583]: time="2026-01-20T00:42:01.292076743Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 20 00:42:01.390879 containerd[1583]: time="2026-01-20T00:42:01.385900696Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"67115e493c92527d59092879c476d9809c3d137f6c4464b503b3c8281e16b9b6\""
Jan 20 00:42:01.399394 containerd[1583]: time="2026-01-20T00:42:01.394843388Z" level=info msg="StartContainer for \"67115e493c92527d59092879c476d9809c3d137f6c4464b503b3c8281e16b9b6\""
Jan 20 00:42:01.626938 containerd[1583]: time="2026-01-20T00:42:01.626756735Z" level=info msg="StartContainer for \"67115e493c92527d59092879c476d9809c3d137f6c4464b503b3c8281e16b9b6\" returns successfully"
Jan 20 00:42:01.737104 containerd[1583]: time="2026-01-20T00:42:01.733755020Z" level=info msg="shim disconnected" id=67115e493c92527d59092879c476d9809c3d137f6c4464b503b3c8281e16b9b6 namespace=k8s.io
Jan 20 00:42:01.737104 containerd[1583]: time="2026-01-20T00:42:01.733816345Z" level=warning msg="cleaning up after shim disconnected" id=67115e493c92527d59092879c476d9809c3d137f6c4464b503b3c8281e16b9b6 namespace=k8s.io
Jan 20 00:42:01.737104 containerd[1583]: time="2026-01-20T00:42:01.733829830Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:42:02.070601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67115e493c92527d59092879c476d9809c3d137f6c4464b503b3c8281e16b9b6-rootfs.mount: Deactivated successfully.
Jan 20 00:42:02.291777 kubelet[2644]: E0120 00:42:02.287931 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:02.293955 containerd[1583]: time="2026-01-20T00:42:02.293203384Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 20 00:42:02.351957 containerd[1583]: time="2026-01-20T00:42:02.351629699Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f71477d5174ae47f507313058f12e8ca56464c6e13582b26a01577ae700f4d0c\""
Jan 20 00:42:02.353741 containerd[1583]: time="2026-01-20T00:42:02.352474485Z" level=info msg="StartContainer for \"f71477d5174ae47f507313058f12e8ca56464c6e13582b26a01577ae700f4d0c\""
Jan 20 00:42:02.562789 containerd[1583]: time="2026-01-20T00:42:02.561187888Z" level=info msg="StartContainer for \"f71477d5174ae47f507313058f12e8ca56464c6e13582b26a01577ae700f4d0c\" returns successfully"
Jan 20 00:42:02.650829 containerd[1583]: time="2026-01-20T00:42:02.650514059Z" level=info msg="shim disconnected" id=f71477d5174ae47f507313058f12e8ca56464c6e13582b26a01577ae700f4d0c namespace=k8s.io
Jan 20 00:42:02.650829 containerd[1583]: time="2026-01-20T00:42:02.650606802Z" level=warning msg="cleaning up after shim disconnected" id=f71477d5174ae47f507313058f12e8ca56464c6e13582b26a01577ae700f4d0c namespace=k8s.io
Jan 20 00:42:02.650829 containerd[1583]: time="2026-01-20T00:42:02.650624204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:42:03.070250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f71477d5174ae47f507313058f12e8ca56464c6e13582b26a01577ae700f4d0c-rootfs.mount: Deactivated successfully.
Jan 20 00:42:03.316378 kubelet[2644]: E0120 00:42:03.307936 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:03.339830 containerd[1583]: time="2026-01-20T00:42:03.323406181Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 00:42:03.426319 containerd[1583]: time="2026-01-20T00:42:03.426213422Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5999562fa2ca99606b6fa0771b3e128d466e9b92a2874e3f999c21dd0f049e2\""
Jan 20 00:42:03.428704 containerd[1583]: time="2026-01-20T00:42:03.427167973Z" level=info msg="StartContainer for \"d5999562fa2ca99606b6fa0771b3e128d466e9b92a2874e3f999c21dd0f049e2\""
Jan 20 00:42:03.472724 kubelet[2644]: E0120 00:42:03.470709 2644 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 00:42:03.649613 containerd[1583]: time="2026-01-20T00:42:03.646823816Z" level=info msg="StartContainer for \"d5999562fa2ca99606b6fa0771b3e128d466e9b92a2874e3f999c21dd0f049e2\" returns successfully"
Jan 20 00:42:03.738604 containerd[1583]: time="2026-01-20T00:42:03.738530996Z" level=info msg="shim disconnected" id=d5999562fa2ca99606b6fa0771b3e128d466e9b92a2874e3f999c21dd0f049e2 namespace=k8s.io
Jan 20 00:42:03.739150 containerd[1583]: time="2026-01-20T00:42:03.738950759Z" level=warning msg="cleaning up after shim disconnected" id=d5999562fa2ca99606b6fa0771b3e128d466e9b92a2874e3f999c21dd0f049e2 namespace=k8s.io
Jan 20 00:42:03.739150 containerd[1583]: time="2026-01-20T00:42:03.739034245Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:42:04.068468 systemd[1]: run-containerd-runc-k8s.io-d5999562fa2ca99606b6fa0771b3e128d466e9b92a2874e3f999c21dd0f049e2-runc.gQb3PF.mount: Deactivated successfully.
Jan 20 00:42:04.068791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5999562fa2ca99606b6fa0771b3e128d466e9b92a2874e3f999c21dd0f049e2-rootfs.mount: Deactivated successfully.
Jan 20 00:42:04.332323 kubelet[2644]: E0120 00:42:04.324679 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:04.345909 containerd[1583]: time="2026-01-20T00:42:04.345750595Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 00:42:04.394754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376403538.mount: Deactivated successfully.
Jan 20 00:42:04.411450 containerd[1583]: time="2026-01-20T00:42:04.411343354Z" level=info msg="CreateContainer within sandbox \"57b2376e177585fa049d2267918706812aafb6bc40fed8653f06874866813bd8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"adc81085333a0868dc0839d449eb838831eecb4bc37c88942872f4ef9f8c8b01\""
Jan 20 00:42:04.413901 containerd[1583]: time="2026-01-20T00:42:04.412337635Z" level=info msg="StartContainer for \"adc81085333a0868dc0839d449eb838831eecb4bc37c88942872f4ef9f8c8b01\""
Jan 20 00:42:04.567840 containerd[1583]: time="2026-01-20T00:42:04.567776668Z" level=info msg="StartContainer for \"adc81085333a0868dc0839d449eb838831eecb4bc37c88942872f4ef9f8c8b01\" returns successfully"
Jan 20 00:42:05.334747 kubelet[2644]: E0120 00:42:05.334645 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:05.377539 kubelet[2644]: I0120 00:42:05.377417 2644 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mzhrc" podStartSLOduration=6.377394881 podStartE2EDuration="6.377394881s" podCreationTimestamp="2026-01-20 00:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:42:05.375188504 +0000 UTC m=+192.188275731" watchObservedRunningTime="2026-01-20 00:42:05.377394881 +0000 UTC m=+192.190482088"
Jan 20 00:42:05.568062 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 20 00:42:06.336750 kubelet[2644]: E0120 00:42:06.336474 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:07.657955 kubelet[2644]: I0120 00:42:07.656402 2644 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T00:42:07Z","lastTransitionTime":"2026-01-20T00:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 20 00:42:11.249211 systemd-networkd[1248]: lxc_health: Link UP
Jan 20 00:42:11.304195 systemd-networkd[1248]: lxc_health: Gained carrier
Jan 20 00:42:11.959505 kubelet[2644]: E0120 00:42:11.958879 2644 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34838->127.0.0.1:32935: write tcp 127.0.0.1:34838->127.0.0.1:32935: write: broken pipe
Jan 20 00:42:12.247900 kubelet[2644]: E0120 00:42:12.245377 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:12.397777 kubelet[2644]: E0120 00:42:12.396043 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:13.265033 systemd-networkd[1248]: lxc_health: Gained IPv6LL
Jan 20 00:42:13.397066 kubelet[2644]: E0120 00:42:13.394270 2644 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:42:17.113600 sshd[4740]: pam_unix(sshd:session): session closed for user core
Jan 20 00:42:17.128511 systemd[1]: sshd@41-10.0.0.52:22-10.0.0.1:43252.service: Deactivated successfully.
Jan 20 00:42:17.139630 systemd-logind[1562]: Session 42 logged out. Waiting for processes to exit.
Jan 20 00:42:17.145033 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 00:42:17.150174 systemd-logind[1562]: Removed session 42.