Jan 23 01:21:28.924411 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026 Jan 23 01:21:28.924464 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:21:28.924473 kernel: BIOS-provided physical RAM map: Jan 23 01:21:28.924479 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Jan 23 01:21:28.924485 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Jan 23 01:21:28.924491 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 23 01:21:28.924501 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Jan 23 01:21:28.924507 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Jan 23 01:21:28.924513 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 23 01:21:28.924519 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 23 01:21:28.924525 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 23 01:21:28.924531 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 23 01:21:28.924537 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Jan 23 01:21:28.924543 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 23 01:21:28.924552 kernel: NX (Execute Disable) protection: active Jan 23 01:21:28.924559 kernel: APIC: Static calls initialized Jan 23 01:21:28.924565 kernel: SMBIOS 2.8 present. 
Jan 23 01:21:28.924571 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Jan 23 01:21:28.924578 kernel: DMI: Memory slots populated: 1/1 Jan 23 01:21:28.924584 kernel: Hypervisor detected: KVM Jan 23 01:21:28.924592 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jan 23 01:21:28.924598 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 01:21:28.924605 kernel: kvm-clock: using sched offset of 7459078522 cycles Jan 23 01:21:28.924611 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 01:21:28.924618 kernel: tsc: Detected 2000.002 MHz processor Jan 23 01:21:28.924625 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 01:21:28.924632 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 01:21:28.924638 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Jan 23 01:21:28.924645 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 23 01:21:28.924651 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 01:21:28.924660 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Jan 23 01:21:28.924666 kernel: Using GB pages for direct mapping Jan 23 01:21:28.924673 kernel: ACPI: Early table checksum verification disabled Jan 23 01:21:28.924679 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Jan 23 01:21:28.924686 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:21:28.924692 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:21:28.924699 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:21:28.924705 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 23 01:21:28.924712 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:21:28.924721 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:21:28.924730 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:21:28.924737 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 01:21:28.924744 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Jan 23 01:21:28.924751 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Jan 23 01:21:28.924760 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 23 01:21:28.924766 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Jan 23 01:21:28.924773 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Jan 23 01:21:28.924780 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Jan 23 01:21:28.924786 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Jan 23 01:21:28.924793 kernel: No NUMA configuration found Jan 23 01:21:28.924800 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Jan 23 01:21:28.924807 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] Jan 23 01:21:28.924813 kernel: Zone ranges: Jan 23 01:21:28.924822 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 01:21:28.924829 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 23 01:21:28.924836 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Jan 23 01:21:28.924842 kernel: Device empty Jan 23 01:21:28.924849 kernel: Movable zone start for each node Jan 23 
01:21:28.924855 kernel: Early memory node ranges Jan 23 01:21:28.924862 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 23 01:21:28.924869 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Jan 23 01:21:28.924875 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 23 01:21:28.924882 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 23 01:21:28.924891 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 01:21:28.924898 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 23 01:21:28.924904 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 23 01:21:28.924911 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 01:21:28.924918 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 01:21:28.924924 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 01:21:28.924931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 01:21:28.924938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 01:21:28.924944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 01:21:28.924953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 01:21:28.924960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 01:21:28.924966 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 01:21:28.924973 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 23 01:21:28.924980 kernel: TSC deadline timer available Jan 23 01:21:28.924986 kernel: CPU topo: Max. logical packages: 1 Jan 23 01:21:28.924993 kernel: CPU topo: Max. logical dies: 1 Jan 23 01:21:28.925000 kernel: CPU topo: Max. dies per package: 1 Jan 23 01:21:28.925006 kernel: CPU topo: Max. threads per core: 1 Jan 23 01:21:28.925015 kernel: CPU topo: Num. cores per package: 2 Jan 23 01:21:28.925022 kernel: CPU topo: Num. 
threads per package: 2 Jan 23 01:21:28.925028 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 23 01:21:28.925035 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 01:21:28.925042 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 01:21:28.925048 kernel: kvm-guest: setup PV sched yield Jan 23 01:21:28.925055 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 23 01:21:28.925062 kernel: Booting paravirtualized kernel on KVM Jan 23 01:21:28.925069 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 01:21:28.925078 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 23 01:21:28.925084 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 23 01:21:28.925091 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 23 01:21:28.925098 kernel: pcpu-alloc: [0] 0 1 Jan 23 01:21:28.925104 kernel: kvm-guest: PV spinlocks enabled Jan 23 01:21:28.925111 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 01:21:28.925119 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:21:28.925126 kernel: random: crng init done Jan 23 01:21:28.925134 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 01:21:28.925141 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 01:21:28.925148 kernel: Fallback order for Node 0: 0 Jan 23 01:21:28.925155 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Jan 23 01:21:28.925161 kernel: Policy zone: Normal Jan 23 01:21:28.925168 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 01:21:28.925174 kernel: software IO TLB: area num 2. Jan 23 01:21:28.925181 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 01:21:28.925188 kernel: ftrace: allocating 40097 entries in 157 pages Jan 23 01:21:28.925196 kernel: ftrace: allocated 157 pages with 5 groups Jan 23 01:21:28.925203 kernel: Dynamic Preempt: voluntary Jan 23 01:21:28.925210 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 01:21:28.925217 kernel: rcu: RCU event tracing is enabled. Jan 23 01:21:28.925224 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 01:21:28.925231 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 01:21:28.925238 kernel: Rude variant of Tasks RCU enabled. Jan 23 01:21:28.925245 kernel: Tracing variant of Tasks RCU enabled. Jan 23 01:21:28.925251 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 01:21:28.925258 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 01:21:28.925267 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 01:21:28.925281 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 01:21:28.925290 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 23 01:21:28.925297 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 23 01:21:28.925304 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 01:21:28.925311 kernel: Console: colour VGA+ 80x25 Jan 23 01:21:28.925318 kernel: printk: legacy console [tty0] enabled Jan 23 01:21:28.925325 kernel: printk: legacy console [ttyS0] enabled Jan 23 01:21:28.925332 kernel: ACPI: Core revision 20240827 Jan 23 01:21:28.925341 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 23 01:21:28.925348 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 01:21:28.925355 kernel: x2apic enabled Jan 23 01:21:28.925362 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 01:21:28.925369 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 01:21:28.925376 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 01:21:28.925383 kernel: kvm-guest: setup PV IPIs Jan 23 01:21:28.926122 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 23 01:21:28.926137 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns Jan 23 01:21:28.926145 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002) Jan 23 01:21:28.926152 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 01:21:28.926159 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 23 01:21:28.926167 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 23 01:21:28.926174 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 01:21:28.926181 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 01:21:28.926188 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 01:21:28.926200 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 23 01:21:28.926207 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 23 01:21:28.926214 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 23 01:21:28.926221 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 23 01:21:28.926228 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 23 01:21:28.926235 kernel: active return thunk: srso_alias_return_thunk Jan 23 01:21:28.926243 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 23 01:21:28.926250 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 23 01:21:28.926259 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 01:21:28.926266 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 01:21:28.926273 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 01:21:28.926280 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 01:21:28.926288 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 23 01:21:28.926295 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 01:21:28.926302 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Jan 23 01:21:28.926309 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. 
Jan 23 01:21:28.926316 kernel: Freeing SMP alternatives memory: 32K Jan 23 01:21:28.926325 kernel: pid_max: default: 32768 minimum: 301 Jan 23 01:21:28.926332 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 01:21:28.926339 kernel: landlock: Up and running. Jan 23 01:21:28.926346 kernel: SELinux: Initializing. Jan 23 01:21:28.926353 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 01:21:28.926376 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 01:21:28.926383 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 23 01:21:28.926390 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 23 01:21:28.926397 kernel: ... version: 0 Jan 23 01:21:28.926407 kernel: ... bit width: 48 Jan 23 01:21:28.926414 kernel: ... generic registers: 6 Jan 23 01:21:28.926421 kernel: ... value mask: 0000ffffffffffff Jan 23 01:21:28.926454 kernel: ... max period: 00007fffffffffff Jan 23 01:21:28.926462 kernel: ... fixed-purpose events: 0 Jan 23 01:21:28.926468 kernel: ... event mask: 000000000000003f Jan 23 01:21:28.926475 kernel: signal: max sigframe size: 3376 Jan 23 01:21:28.926482 kernel: rcu: Hierarchical SRCU implementation. Jan 23 01:21:28.926490 kernel: rcu: Max phase no-delay instances is 400. Jan 23 01:21:28.927317 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 01:21:28.927326 kernel: smp: Bringing up secondary CPUs ... Jan 23 01:21:28.927333 kernel: smpboot: x86: Booting SMP configuration: Jan 23 01:21:28.927340 kernel: .... node #0, CPUs: #1 Jan 23 01:21:28.927347 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 01:21:28.927354 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Jan 23 01:21:28.927362 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 235480K reserved, 0K cma-reserved) Jan 23 01:21:28.927369 kernel: devtmpfs: initialized Jan 23 01:21:28.927376 kernel: x86/mm: Memory block size: 128MB Jan 23 01:21:28.927386 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 01:21:28.927393 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 01:21:28.927400 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 01:21:28.927407 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 01:21:28.927414 kernel: audit: initializing netlink subsys (disabled) Jan 23 01:21:28.927422 kernel: audit: type=2000 audit(1769131285.955:1): state=initialized audit_enabled=0 res=1 Jan 23 01:21:28.927457 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 01:21:28.927465 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 01:21:28.927472 kernel: cpuidle: using governor menu Jan 23 01:21:28.927483 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 01:21:28.927490 kernel: dca service started, version 1.12.1 Jan 23 01:21:28.927497 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 23 01:21:28.927504 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 23 01:21:28.927512 kernel: PCI: Using configuration type 1 for base access Jan 23 01:21:28.927519 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 01:21:28.927526 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 01:21:28.927533 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 01:21:28.927540 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 01:21:28.927549 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 01:21:28.927556 kernel: ACPI: Added _OSI(Module Device) Jan 23 01:21:28.927563 kernel: ACPI: Added _OSI(Processor Device) Jan 23 01:21:28.927571 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 01:21:28.927578 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 01:21:28.927584 kernel: ACPI: Interpreter enabled Jan 23 01:21:28.927591 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 01:21:28.927598 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 01:21:28.927606 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 01:21:28.927615 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 01:21:28.927622 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 01:21:28.927629 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 01:21:28.927817 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 01:21:28.927948 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 01:21:28.928074 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 01:21:28.928084 kernel: PCI host bridge to bus 0000:00 Jan 23 01:21:28.928212 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 01:21:28.928326 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 01:21:28.928458 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 01:21:28.928575 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 23 01:21:28.928686 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 23 01:21:28.928796 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Jan 23 01:21:28.928906 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 01:21:28.929088 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 01:21:28.930611 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 23 01:21:28.930746 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 23 01:21:28.930894 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 23 01:21:28.931075 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 23 01:21:28.931256 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 01:21:28.936032 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Jan 23 01:21:28.936182 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Jan 23 01:21:28.936309 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 23 01:21:28.939006 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 23 01:21:28.939195 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 23 01:21:28.939531 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Jan 23 01:21:28.939656 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 23 01:21:28.939785 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 23 01:21:28.939909 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 23 01:21:28.940040 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 01:21:28.940163 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 01:21:28.940295 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 01:21:28.940416 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Jan 23 01:21:28.940557 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Jan 23 01:21:28.940695 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 01:21:28.940816 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 23 01:21:28.940826 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 01:21:28.940834 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 01:21:28.940841 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 01:21:28.940848 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 01:21:28.940856 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 01:21:28.940866 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 01:21:28.940873 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 01:21:28.940881 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 01:21:28.940888 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 01:21:28.940895 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 01:21:28.940902 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 01:21:28.940909 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 01:21:28.940916 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 01:21:28.940924 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 01:21:28.940933 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 01:21:28.940940 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 01:21:28.940947 kernel: iommu: Default domain type: Translated Jan 23 01:21:28.940955 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 01:21:28.940962 kernel: PCI: Using ACPI for IRQ routing Jan 23 01:21:28.940969 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 01:21:28.940976 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Jan 23 01:21:28.940983 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jan 23 01:21:28.941104 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 01:21:28.941256 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 01:21:28.941382 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 01:21:28.941393 kernel: vgaarb: loaded Jan 23 01:21:28.943442 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 23 01:21:28.943466 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 23 01:21:28.943494 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 01:21:28.943502 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 01:21:28.943510 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 01:21:28.943517 kernel: pnp: PnP ACPI init Jan 23 01:21:28.943670 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 23 01:21:28.943682 kernel: pnp: PnP ACPI: found 5 devices Jan 23 01:21:28.943690 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 01:21:28.943697 kernel: NET: Registered PF_INET protocol family Jan 23 01:21:28.943704 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 01:21:28.943711 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 01:21:28.943719 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 01:21:28.943726 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 01:21:28.943737 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 01:21:28.943744 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 01:21:28.943751 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 01:21:28.943759 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 01:21:28.943766 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 01:21:28.943773 kernel: NET: Registered PF_XDP protocol family Jan 23 01:21:28.943890 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 01:21:28.944003 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 01:21:28.944115 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 01:21:28.944231 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 23 01:21:28.946066 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 23 01:21:28.946191 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Jan 23 01:21:28.946202 kernel: PCI: CLS 0 bytes, default 64 Jan 23 01:21:28.946210 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 01:21:28.946217 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Jan 23 01:21:28.946225 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns Jan 23 01:21:28.946232 kernel: Initialise system trusted keyrings Jan 23 01:21:28.946462 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 01:21:28.946480 kernel: Key type asymmetric registered Jan 23 01:21:28.946487 kernel: Asymmetric key parser 'x509' registered Jan 23 01:21:28.946495 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 01:21:28.946502 kernel: io scheduler mq-deadline registered Jan 23 01:21:28.946509 kernel: io scheduler kyber registered Jan 23 01:21:28.946516 kernel: io scheduler bfq registered Jan 23 01:21:28.946524 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 01:21:28.946532 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 01:21:28.946543 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 01:21:28.946550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 01:21:28.946557 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 01:21:28.946565 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 01:21:28.946572 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 01:21:28.946579 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 01:21:28.946723 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 01:21:28.946736 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 01:21:28.946857 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 01:21:28.946973 kernel: rtc_cmos 00:03: setting system clock to 
2026-01-23T01:21:28 UTC (1769131288) Jan 23 01:21:28.947088 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 23 01:21:28.947097 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 23 01:21:28.947105 kernel: NET: Registered PF_INET6 protocol family Jan 23 01:21:28.947112 kernel: Segment Routing with IPv6 Jan 23 01:21:28.947119 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 01:21:28.947126 kernel: NET: Registered PF_PACKET protocol family Jan 23 01:21:28.947134 kernel: Key type dns_resolver registered Jan 23 01:21:28.947144 kernel: IPI shorthand broadcast: enabled Jan 23 01:21:28.947151 kernel: sched_clock: Marking stable (3112118727, 384504962)->(3600489579, -103865890) Jan 23 01:21:28.947159 kernel: registered taskstats version 1 Jan 23 01:21:28.947166 kernel: Loading compiled-in X.509 certificates Jan 23 01:21:28.947173 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a' Jan 23 01:21:28.947180 kernel: Demotion targets for Node 0: null Jan 23 01:21:28.947187 kernel: Key type .fscrypt registered Jan 23 01:21:28.947194 kernel: Key type fscrypt-provisioning registered Jan 23 01:21:28.947201 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 01:21:28.947210 kernel: ima: Allocated hash algorithm: sha1 Jan 23 01:21:28.947218 kernel: ima: No architecture policies found Jan 23 01:21:28.947225 kernel: clk: Disabling unused clocks Jan 23 01:21:28.947232 kernel: Warning: unable to open an initial console. Jan 23 01:21:28.947240 kernel: Freeing unused kernel image (initmem) memory: 46196K Jan 23 01:21:28.947247 kernel: Write protecting the kernel read-only data: 40960k Jan 23 01:21:28.947254 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 01:21:28.947261 kernel: Run /init as init process Jan 23 01:21:28.947268 kernel: with arguments: Jan 23 01:21:28.947278 kernel: /init Jan 23 01:21:28.947285 kernel: with environment: Jan 23 01:21:28.947307 kernel: HOME=/ Jan 23 01:21:28.947316 kernel: TERM=linux Jan 23 01:21:28.947325 systemd[1]: Successfully made /usr/ read-only. Jan 23 01:21:28.947336 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:21:28.947344 systemd[1]: Detected virtualization kvm. Jan 23 01:21:28.947354 systemd[1]: Detected architecture x86-64. Jan 23 01:21:28.947362 systemd[1]: Running in initrd. Jan 23 01:21:28.947370 systemd[1]: No hostname configured, using default hostname. Jan 23 01:21:28.947378 systemd[1]: Hostname set to <localhost>. Jan 23 01:21:28.947386 systemd[1]: Initializing machine ID from random generator. Jan 23 01:21:28.947394 systemd[1]: Queued start job for default target initrd.target. Jan 23 01:21:28.947402 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:21:28.947410 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:21:28.947421 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 01:21:28.947445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 23 01:21:28.947454 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 01:21:28.947462 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 01:21:28.947471 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 01:21:28.947479 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 01:21:28.947487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:21:28.947498 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:21:28.947506 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:21:28.947514 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:21:28.947522 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:21:28.947530 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:21:28.947540 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:21:28.947548 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:21:28.947556 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 01:21:28.947564 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 01:21:28.947574 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:21:28.947582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:21:28.947592 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:21:28.947600 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:21:28.947608 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 01:21:28.947619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:21:28.947627 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 01:21:28.947635 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 01:21:28.947643 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 01:21:28.947651 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:21:28.947659 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:21:28.947667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:21:28.947674 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 01:21:28.947685 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:21:28.947716 systemd-journald[187]: Collecting audit messages is disabled. Jan 23 01:21:28.947739 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 01:21:28.947747 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:21:28.947756 systemd-journald[187]: Journal started Jan 23 01:21:28.947773 systemd-journald[187]: Runtime Journal (/run/log/journal/cd5e1c910df840fa8b0a573cb1a9b601) is 8M, max 78.2M, 70.2M free. Jan 23 01:21:28.929373 systemd-modules-load[188]: Inserted module 'overlay' Jan 23 01:21:28.953944 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 23 01:21:28.974457 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 01:21:28.976059 systemd-modules-load[188]: Inserted module 'br_netfilter' Jan 23 01:21:29.079852 kernel: Bridge firewalling registered Jan 23 01:21:29.080726 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:21:29.081866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:21:29.083701 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:21:29.088708 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 01:21:29.092541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:21:29.101299 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:21:29.106534 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:21:29.113683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:21:29.118156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:21:29.126852 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:21:29.129786 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 01:21:29.133319 systemd-tmpfiles[209]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 01:21:29.141505 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:21:29.146545 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:21:29.154532 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:21:29.190035 systemd-resolved[232]: Positive Trust Anchors: Jan 23 01:21:29.190936 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:21:29.190964 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:21:29.197757 systemd-resolved[232]: Defaulting to hostname 'linux'. Jan 23 01:21:29.198856 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:21:29.200190 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:21:29.247461 kernel: SCSI subsystem initialized Jan 23 01:21:29.257533 kernel: Loading iSCSI transport class v2.0-870. 
Jan 23 01:21:29.268598 kernel: iscsi: registered transport (tcp) Jan 23 01:21:29.290131 kernel: iscsi: registered transport (qla4xxx) Jan 23 01:21:29.290167 kernel: QLogic iSCSI HBA Driver Jan 23 01:21:29.311912 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:21:29.334247 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:21:29.337619 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:21:29.384210 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 01:21:29.387137 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 01:21:29.439467 kernel: raid6: avx2x4 gen() 28853 MB/s Jan 23 01:21:29.456459 kernel: raid6: avx2x2 gen() 29156 MB/s Jan 23 01:21:29.476982 kernel: raid6: avx2x1 gen() 19026 MB/s Jan 23 01:21:29.477009 kernel: raid6: using algorithm avx2x2 gen() 29156 MB/s Jan 23 01:21:29.497842 kernel: raid6: .... xor() 27649 MB/s, rmw enabled Jan 23 01:21:29.497888 kernel: raid6: using avx2x2 recovery algorithm Jan 23 01:21:29.520452 kernel: xor: automatically using best checksumming function avx Jan 23 01:21:29.685478 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 01:21:29.693405 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:21:29.695958 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:21:29.725653 systemd-udevd[435]: Using default interface naming scheme 'v255'. Jan 23 01:21:29.732656 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:21:29.737503 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 01:21:29.759782 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation Jan 23 01:21:29.789486 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:21:29.792559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:21:29.866955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:21:29.872115 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 01:21:29.960532 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Jan 23 01:21:29.966443 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 01:21:30.145475 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 01:21:30.152094 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:21:30.161706 kernel: scsi host0: Virtio SCSI HBA Jan 23 01:21:30.161912 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 23 01:21:30.152210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:21:30.165183 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:21:30.168774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:21:30.172807 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:21:30.215448 kernel: AES CTR mode by8 optimization enabled Jan 23 01:21:30.223490 kernel: libata version 3.00 loaded. 
Jan 23 01:21:30.258666 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 23 01:21:30.258917 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Jan 23 01:21:30.259075 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 01:21:30.259410 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 23 01:21:30.259586 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 01:21:30.273003 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 01:21:30.273030 kernel: GPT:9289727 != 167739391 Jan 23 01:21:30.273048 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 01:21:30.273059 kernel: GPT:9289727 != 167739391 Jan 23 01:21:30.273068 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 01:21:30.273077 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 01:21:30.273088 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 01:21:30.279453 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 01:21:30.284463 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 01:21:30.285476 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 01:21:30.285655 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 01:21:30.285803 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 01:21:30.293494 kernel: scsi host1: ahci Jan 23 01:21:30.294561 kernel: scsi host2: ahci Jan 23 01:21:30.295448 kernel: scsi host3: ahci Jan 23 01:21:30.296455 kernel: scsi host4: ahci Jan 23 01:21:30.296847 kernel: scsi host5: ahci Jan 23 01:21:30.298510 kernel: scsi host6: ahci Jan 23 01:21:30.298761 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1 Jan 23 01:21:30.298774 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1 Jan 23 01:21:30.298784 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1 Jan 23 01:21:30.298794 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1 Jan 23 01:21:30.298803 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1 Jan 23 01:21:30.298813 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1 Jan 23 01:21:30.353077 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 23 01:21:30.442889 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:21:30.459033 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 23 01:21:30.472175 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 23 01:21:30.473210 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 23 01:21:30.482730 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 01:21:30.485030 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 01:21:30.505768 disk-uuid[606]: Primary Header is updated. Jan 23 01:21:30.505768 disk-uuid[606]: Secondary Entries is updated. Jan 23 01:21:30.505768 disk-uuid[606]: Secondary Header is updated. 
Jan 23 01:21:30.517646 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 01:21:30.530455 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 01:21:30.614869 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 01:21:30.614918 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 01:21:30.615474 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 01:21:30.622899 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 01:21:30.622924 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 01:21:30.623447 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 01:21:30.706798 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 01:21:30.723900 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:21:30.724896 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:21:30.726787 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:21:30.730533 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 01:21:30.767722 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:21:31.535456 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 01:21:31.535978 disk-uuid[607]: The operation has completed successfully. Jan 23 01:21:31.590331 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 01:21:31.590466 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 01:21:31.616872 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 01:21:31.630860 sh[635]: Success Jan 23 01:21:31.650730 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 01:21:31.650793 kernel: device-mapper: uevent: version 1.0.3 Jan 23 01:21:31.656493 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 01:21:31.667467 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 01:21:31.713630 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 01:21:31.717516 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 01:21:31.734498 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 01:21:31.748454 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (647) Jan 23 01:21:31.748488 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 01:21:31.752483 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:21:31.768108 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 01:21:31.768152 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 01:21:31.768171 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 01:21:31.772246 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 01:21:31.773600 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:21:31.775011 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 01:21:31.777575 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 23 01:21:31.780727 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 01:21:31.820472 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (682) Jan 23 01:21:31.827145 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:21:31.827181 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:21:31.837597 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 01:21:31.837638 kernel: BTRFS info (device sda6): turning on async discard Jan 23 01:21:31.837650 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 01:21:31.845466 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:21:31.846712 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 01:21:31.850578 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 01:21:31.913706 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:21:31.917660 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:21:31.971849 ignition[753]: Ignition 2.22.0 Jan 23 01:21:31.972360 ignition[753]: Stage: fetch-offline Jan 23 01:21:31.973071 ignition[753]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:21:31.973086 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:21:31.977687 systemd-networkd[816]: lo: Link UP Jan 23 01:21:31.973178 ignition[753]: parsed url from cmdline: "" Jan 23 01:21:31.977692 systemd-networkd[816]: lo: Gained carrier Jan 23 01:21:31.973183 ignition[753]: no config URL provided Jan 23 01:21:31.977873 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:21:31.973188 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:21:31.979505 systemd-networkd[816]: Enumeration completed Jan 23 01:21:31.973197 ignition[753]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:21:31.979877 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:21:31.973202 ignition[753]: failed to fetch config: resource requires networking Jan 23 01:21:31.979881 systemd-networkd[816]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:21:31.973498 ignition[753]: Ignition finished successfully Jan 23 01:21:31.980164 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:21:31.981969 systemd[1]: Reached target network.target - Network. Jan 23 01:21:31.982407 systemd-networkd[816]: eth0: Link UP Jan 23 01:21:31.982634 systemd-networkd[816]: eth0: Gained carrier Jan 23 01:21:31.982643 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:21:31.986818 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 01:21:32.017200 ignition[824]: Ignition 2.22.0 Jan 23 01:21:32.017215 ignition[824]: Stage: fetch Jan 23 01:21:32.017323 ignition[824]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:21:32.017530 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:21:32.017606 ignition[824]: parsed url from cmdline: "" Jan 23 01:21:32.017610 ignition[824]: no config URL provided Jan 23 01:21:32.017616 ignition[824]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:21:32.017625 ignition[824]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:21:32.017647 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #1 Jan 23 01:21:32.017780 ignition[824]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 01:21:32.218317 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #2 Jan 23 01:21:32.218741 ignition[824]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 01:21:32.618848 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #3 Jan 23 01:21:32.619014 ignition[824]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 01:21:32.764487 systemd-networkd[816]: eth0: DHCPv4 address 172.238.187.240/24, gateway 172.238.187.1 acquired from 23.205.167.160 Jan 23 01:21:33.419810 ignition[824]: PUT http://169.254.169.254/v1/token: attempt #4 Jan 23 01:21:33.591257 ignition[824]: PUT result: OK Jan 23 01:21:33.591386 ignition[824]: GET http://169.254.169.254/v1/user-data: attempt #1 Jan 23 01:21:33.700255 ignition[824]: GET result: OK Jan 23 01:21:33.700661 ignition[824]: parsing config with SHA512: 3a6c08d92784bd1ce989973c4bf1f819720081c09da82d1099af1dbc28d2f78cfe2bea747264d575f8888a501ee3d3c828817c601ae6d90faf2aa9f345b8b91a Jan 23 01:21:33.704729 unknown[824]: fetched base config from "system" Jan 23 01:21:33.704839 unknown[824]: fetched base config from "system" Jan 23 01:21:33.704846 unknown[824]: fetched user config from "akamai" Jan 23 01:21:33.707321 ignition[824]: fetch: fetch complete Jan 23 01:21:33.716740 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 01:21:33.707328 ignition[824]: fetch: fetch passed Jan 23 01:21:33.707374 ignition[824]: Ignition finished successfully Jan 23 01:21:33.732561 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 01:21:33.768851 ignition[832]: Ignition 2.22.0 Jan 23 01:21:33.768867 ignition[832]: Stage: kargs Jan 23 01:21:33.768988 ignition[832]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:21:33.768999 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:21:33.770003 ignition[832]: kargs: kargs passed Jan 23 01:21:33.772881 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 01:21:33.770048 ignition[832]: Ignition finished successfully Jan 23 01:21:33.775590 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 01:21:33.802315 ignition[839]: Ignition 2.22.0 Jan 23 01:21:33.802332 ignition[839]: Stage: disks Jan 23 01:21:33.802476 ignition[839]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:21:33.802487 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:21:33.804698 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 23 01:21:33.803045 ignition[839]: disks: disks passed Jan 23 01:21:33.806062 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 01:21:33.803084 ignition[839]: Ignition finished successfully Jan 23 01:21:33.808239 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 01:21:33.809798 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:21:33.811180 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:21:33.812886 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:21:33.816526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 01:21:33.846044 systemd-fsck[847]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 01:21:33.848941 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:21:33.852497 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:21:33.953516 systemd-networkd[816]: eth0: Gained IPv6LL Jan 23 01:21:33.969552 kernel: EXT4-fs (sda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:21:33.971444 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:21:33.972273 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:21:33.974923 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:21:33.978562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:21:33.980948 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 01:21:33.981825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:21:33.981849 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:21:33.987269 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:21:33.989293 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 01:21:33.998473 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (856) Jan 23 01:21:33.998499 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:21:34.005678 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:21:34.014385 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 01:21:34.014408 kernel: BTRFS info (device sda6): turning on async discard Jan 23 01:21:34.014420 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 01:21:34.018100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:21:34.051958 initrd-setup-root[880]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:21:34.057761 initrd-setup-root[887]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:21:34.064790 initrd-setup-root[894]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:21:34.069867 initrd-setup-root[901]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:21:34.164005 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:21:34.166625 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:21:34.169029 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 23 01:21:34.186861 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 01:21:34.192645 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:21:34.200953 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 01:21:34.227650 ignition[971]: INFO : Ignition 2.22.0 Jan 23 01:21:34.227650 ignition[971]: INFO : Stage: mount Jan 23 01:21:34.229623 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:21:34.229623 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:21:34.229623 ignition[971]: INFO : mount: mount passed Jan 23 01:21:34.229623 ignition[971]: INFO : Ignition finished successfully Jan 23 01:21:34.231518 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:21:34.234873 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:21:34.972782 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:21:35.002455 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (980) Jan 23 01:21:35.002495 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:21:35.009113 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:21:35.016485 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 01:21:35.016514 kernel: BTRFS info (device sda6): turning on async discard Jan 23 01:21:35.016529 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 01:21:35.021106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:21:35.056696 ignition[996]: INFO : Ignition 2.22.0 Jan 23 01:21:35.056696 ignition[996]: INFO : Stage: files Jan 23 01:21:35.058803 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:21:35.058803 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:21:35.058803 ignition[996]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:21:35.058803 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:21:35.058803 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:21:35.064503 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:21:35.064503 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:21:35.064503 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:21:35.064503 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 01:21:35.064503 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 01:21:35.061453 unknown[996]: wrote ssh authorized keys file for user: core Jan 23 01:21:35.327578 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 01:21:35.404764 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 01:21:35.404764 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 01:21:35.407361 ignition[996]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 23 01:21:35.669739 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 01:21:35.787658 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 01:21:35.789155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:21:35.789155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 01:21:35.789155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:21:35.789155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:21:35.789155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:21:35.789155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:21:35.789155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:21:35.789155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:21:35.821338 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:21:35.821338 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:21:35.821338 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:21:35.821338 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:21:35.821338 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:21:35.821338 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 01:21:36.128915 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 01:21:36.754950 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:21:36.754950 ignition[996]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 23 01:21:36.757670 ignition[996]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:21:36.758976 ignition[996]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:21:36.758976 ignition[996]: INFO : files: 
op(c): [finished] processing unit "prepare-helm.service" Jan 23 01:21:36.758976 ignition[996]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 23 01:21:36.758976 ignition[996]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 01:21:36.758976 ignition[996]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 01:21:36.758976 ignition[996]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 23 01:21:36.758976 ignition[996]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 23 01:21:36.769099 ignition[996]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 01:21:36.769099 ignition[996]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:21:36.769099 ignition[996]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:21:36.769099 ignition[996]: INFO : files: files passed Jan 23 01:21:36.769099 ignition[996]: INFO : Ignition finished successfully Jan 23 01:21:36.762703 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:21:36.765659 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 01:21:36.770529 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 01:21:36.781635 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 01:21:36.782565 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 01:21:36.790044 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:21:36.791467 initrd-setup-root-after-ignition[1027]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:21:36.792997 initrd-setup-root-after-ignition[1031]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:21:36.794978 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:21:36.796062 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 01:21:36.798444 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:21:36.858191 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:21:36.858338 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:21:36.859405 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:21:36.860581 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:21:36.862254 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:21:36.862995 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:21:36.888103 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:21:36.890592 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
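Every operation in the files stage above (the SSH keys for "core", the written files, the prepare-helm.service unit and its preset) is driven by the Ignition config fetched earlier, which the log does not reproduce. The sketch below assembles a minimal config of that shape as a Python dict and prints it as JSON; field names follow the Ignition v3 spec as commonly documented, and all values are placeholders, not the real user-data.

```python
#!/usr/bin/env python3
"""Minimal sketch of an Ignition v3-style config that would produce a few of
the operations logged above (write a file, enable a unit, add an SSH key for
"core"). The real user-data is not shown in the log; values here are
placeholders and the spec version is an assumption."""
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/etc/flatcar/update.conf",
             "mode": 420,  # 0644
             "contents": {"source": "data:,GROUP%3Dstable%0A"}}
        ]
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service",
             "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                         "[Service]\nType=oneshot\nExecStart=/usr/bin/true\n"
                         "[Install]\nWantedBy=multi-user.target\n"}
        ]
    },
}

print(json.dumps(config, indent=2))
```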
Jan 23 01:21:36.922397 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:21:36.923280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:21:36.924975 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:21:36.926582 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 01:21:36.926726 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:21:36.928404 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:21:36.929484 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:21:36.930935 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 01:21:36.932417 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:21:36.933896 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:21:36.935610 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:21:36.937256 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:21:36.938871 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:21:36.940510 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:21:36.942060 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:21:36.943817 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:21:36.945362 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:21:36.945533 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:21:36.947732 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:21:36.949190 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:21:36.950523 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:21:36.952675 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:21:36.953881 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:21:36.953980 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:21:36.956010 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:21:36.956168 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:21:36.957165 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:21:36.957297 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:21:36.960509 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:21:36.962524 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:21:36.962641 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:21:36.971559 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:21:36.972277 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:21:36.972388 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 01:21:36.998814 ignition[1051]: INFO : Ignition 2.22.0 Jan 23 01:21:36.998814 ignition[1051]: INFO : Stage: umount Jan 23 01:21:36.998814 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:21:36.998814 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:21:36.998814 ignition[1051]: INFO : umount: umount passed Jan 23 01:21:36.998814 ignition[1051]: INFO : Ignition finished successfully Jan 23 01:21:36.994984 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:21:36.995099 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:21:36.999568 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 01:21:37.003544 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:21:37.010339 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:21:37.011038 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 01:21:37.013632 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:21:37.013737 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:21:37.015258 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:21:37.015311 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:21:37.017044 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 01:21:37.017097 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 01:21:37.018039 systemd[1]: Stopped target network.target - Network. Jan 23 01:21:37.022576 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:21:37.022632 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:21:37.023725 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:21:37.024446 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:21:37.028737 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:21:37.029955 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:21:37.030736 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:21:37.033590 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:21:37.033652 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:21:37.035113 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:21:37.035159 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:21:37.036554 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:21:37.036622 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:21:37.037952 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:21:37.038020 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:21:37.039478 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:21:37.040945 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:21:37.043999 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:21:37.044893 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:21:37.044999 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:21:37.047829 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 23 01:21:37.047958 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:21:37.053725 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:21:37.053998 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:21:37.054131 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:21:37.056466 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:21:37.057597 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:21:37.059000 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:21:37.059043 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:21:37.060701 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:21:37.060756 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:21:37.063201 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:21:37.065606 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:21:37.065678 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:21:37.068628 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:21:37.068681 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:21:37.070002 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:21:37.070051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:21:37.071355 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:21:37.071409 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:21:37.073713 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:21:37.075574 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:21:37.075635 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:21:37.092017 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:21:37.092155 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:21:37.095778 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:21:37.095956 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:21:37.098058 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:21:37.098129 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:21:37.099663 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:21:37.099700 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:21:37.101247 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:21:37.101296 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:21:37.103548 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:21:37.103595 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:21:37.105347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:21:37.105397 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
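The \x2d sequences in unit names above (for example run-credentials-systemd\x2dresolved.service.mount) are systemd's escaping of characters that are not valid in a unit name when a path is converted into one: "/" becomes "-", so a literal "-" inside a path component has to be hex-escaped. A simplified sketch of that rule, ignoring corner cases such as a leading dot or an empty path:

```python
#!/usr/bin/env python3
"""Simplified sketch of systemd's path-to-unit-name escaping, which explains
the "\\x2d" sequences in mount unit names above. It keeps [A-Za-z0-9:_.],
turns "/" into "-", and hex-escapes everything else; corner cases (leading
dot, empty path, trailing slashes) are ignored for brevity."""
import string

SAFE = set(string.ascii_letters + string.digits + ":_.")

def escape_path(path: str) -> str:
    trimmed = path.strip("/")
    out = []
    for ch in trimmed:
        if ch == "/":
            out.append("-")
        elif ch in SAFE:
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

if __name__ == "__main__":
    # Matches the mount unit seen in the log above.
    print(escape_path("/run/credentials/systemd-resolved.service") + ".mount")
    # -> run-credentials-systemd\x2dresolved.service.mount
```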
Jan 23 01:21:37.108531 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:21:37.109572 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:21:37.109627 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:21:37.113605 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:21:37.113656 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:21:37.116813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:21:37.116864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:21:37.122609 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:21:37.122665 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 01:21:37.122710 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:21:37.123144 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:21:37.123247 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:21:37.124839 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:21:37.127137 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:21:37.156340 systemd[1]: Switching root. Jan 23 01:21:37.190414 systemd-journald[187]: Journal stopped Jan 23 01:21:38.506039 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 23 01:21:38.506066 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:21:38.506078 kernel: SELinux: policy capability open_perms=1 Jan 23 01:21:38.506088 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:21:38.506097 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:21:38.506108 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:21:38.506118 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:21:38.506127 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:21:38.506136 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:21:38.506145 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:21:38.506155 kernel: audit: type=1403 audit(1769131297.347:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:21:38.506165 systemd[1]: Successfully loaded SELinux policy in 73.308ms. Jan 23 01:21:38.506178 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.342ms. Jan 23 01:21:38.506190 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:21:38.506201 systemd[1]: Detected virtualization kvm. Jan 23 01:21:38.506211 systemd[1]: Detected architecture x86-64. Jan 23 01:21:38.506223 systemd[1]: Detected first boot. Jan 23 01:21:38.506233 systemd[1]: Initializing machine ID from random generator. Jan 23 01:21:38.506243 zram_generator::config[1094]: No configuration found. 
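Timestamps this precise make the journal itself usable for timing boot phases. The sketch below parses the leading "Mon DD HH:MM:SS.ffffff" prefix of two lines copied from this log and reports the gap between the first Ignition fetch attempt and the switch to the real root; the year is an assumption because the prefix does not carry one.

```python
#!/usr/bin/env python3
"""Sketch of measuring a boot phase from journal timestamps like the ones in
this log. The two sample lines are copied from above; the year is assumed
because the syslog-style prefix does not include one."""
from datetime import datetime

def parse_ts(line: str, year: int = 2026) -> datetime:
    """Parse the leading 'Mon DD HH:MM:SS.ffffff' prefix of a journal line."""
    stamp = " ".join(line.split()[:3])
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

if __name__ == "__main__":
    start = parse_ts("Jan 23 01:21:32.017200 ignition[824]: Ignition 2.22.0")
    pivot = parse_ts("Jan 23 01:21:37.156340 systemd[1]: Switching root.")
    gap = (pivot - start).total_seconds()
    print(f"Ignition fetch start to switch-root: {gap:.3f}s")
    # -> roughly 5.139s for this boot
```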
Jan 23 01:21:38.506254 kernel: Guest personality initialized and is inactive Jan 23 01:21:38.506264 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 01:21:38.506273 kernel: Initialized host personality Jan 23 01:21:38.506282 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:21:38.506292 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:21:38.506305 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:21:38.506315 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:21:38.506325 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:21:38.506335 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:21:38.506345 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:21:38.506355 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:21:38.506365 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:21:38.506393 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:21:38.506404 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:21:38.506414 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 01:21:38.506424 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:21:38.506634 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:21:38.506645 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:21:38.506655 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:21:38.506665 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:21:38.506679 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:21:38.506692 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:21:38.506703 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:21:38.506714 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:21:38.506724 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:21:38.506734 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:21:38.506744 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:21:38.506757 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:21:38.506767 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:21:38.506777 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:21:38.506788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:21:38.506798 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:21:38.506808 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:21:38.506819 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:21:38.506829 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
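"Populated /etc with preset unit settings" above is systemd applying *.preset files on first boot, and the earlier 'setting preset to enabled for "prepare-helm.service"' line is Ignition writing such a preset. The sketch below shows the evaluation rule (first matching enable/disable pattern wins) with illustrative rules rather than Flatcar's shipped preset files.

```python
#!/usr/bin/env python3
"""Sketch of the preset evaluation behind "Populated /etc with preset unit
settings" above: the first matching enable/disable rule wins. The rules here
are illustrative, not Flatcar's shipped preset files."""
from fnmatch import fnmatch

PRESETS = [
    ("enable", "prepare-helm.service"),   # written by Ignition in the files stage
    ("enable", "sshd.socket"),
    ("disable", "*"),                     # typical catch-all last rule
]

def preset_action(unit: str) -> str:
    for action, pattern in PRESETS:
        if fnmatch(unit, pattern):
            return action
    return "enable"  # assumed default when no rule matches

if __name__ == "__main__":
    for unit in ("prepare-helm.service", "getty@tty1.service"):
        print(unit, "->", preset_action(unit))
```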
Jan 23 01:21:38.506839 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:21:38.506851 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:21:38.506862 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:21:38.506872 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:21:38.506883 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:21:38.506897 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:21:38.506907 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:21:38.506917 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:21:38.506928 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:21:38.506938 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:21:38.506949 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:21:38.506959 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:21:38.506969 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 01:21:38.506982 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:21:38.506993 systemd[1]: Reached target machines.target - Containers. Jan 23 01:21:38.507003 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 01:21:38.507013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:21:38.507024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:21:38.507034 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:21:38.507044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:21:38.507055 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:21:38.507065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:21:38.507078 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:21:38.507088 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:21:38.507098 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:21:38.507109 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:21:38.507120 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:21:38.507130 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:21:38.507141 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:21:38.507152 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:21:38.507165 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 23 01:21:38.507175 kernel: fuse: init (API version 7.41) Jan 23 01:21:38.507185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:21:38.507195 kernel: loop: module loaded Jan 23 01:21:38.507205 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:21:38.507216 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:21:38.507225 kernel: ACPI: bus type drm_connector registered Jan 23 01:21:38.507235 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:21:38.507248 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:21:38.507258 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:21:38.507269 systemd[1]: Stopped verity-setup.service. Jan 23 01:21:38.507279 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:21:38.507289 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:21:38.507299 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:21:38.507310 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:21:38.507341 systemd-journald[1181]: Collecting audit messages is disabled. Jan 23 01:21:38.507364 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:21:38.507377 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:21:38.507387 systemd-journald[1181]: Journal started Jan 23 01:21:38.507408 systemd-journald[1181]: Runtime Journal (/run/log/journal/101d2f6c47314963a12552bd9514993e) is 8M, max 78.2M, 70.2M free. Jan 23 01:21:38.054550 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:21:38.082261 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 01:21:38.083338 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:21:38.513474 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:21:38.513893 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:21:38.515167 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:21:38.516288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:21:38.517658 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:21:38.517941 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:21:38.519121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:21:38.519626 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:21:38.520968 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:21:38.521246 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:21:38.522596 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:21:38.522872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:21:38.524151 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:21:38.524419 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:21:38.525808 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 23 01:21:38.526059 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:21:38.527482 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:21:38.528876 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:21:38.530124 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 01:21:38.531307 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:21:38.545974 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:21:38.552614 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:21:38.554602 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:21:38.556540 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:21:38.556570 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:21:38.559285 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:21:38.568544 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:21:38.570063 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:21:38.571594 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:21:38.575570 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:21:38.576509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:21:38.578538 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:21:38.579318 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:21:38.582167 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:21:38.589905 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:21:38.597178 systemd-journald[1181]: Time spent on flushing to /var/log/journal/101d2f6c47314963a12552bd9514993e is 47.723ms for 1008 entries. Jan 23 01:21:38.597178 systemd-journald[1181]: System Journal (/var/log/journal/101d2f6c47314963a12552bd9514993e) is 8M, max 195.6M, 187.6M free. Jan 23 01:21:38.668141 systemd-journald[1181]: Received client request to flush runtime journal. Jan 23 01:21:38.668178 kernel: loop0: detected capacity change from 0 to 8 Jan 23 01:21:38.602620 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:21:38.608878 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:21:38.611777 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:21:38.651702 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:21:38.652893 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:21:38.671768 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:21:38.674239 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 23 01:21:38.698470 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:21:38.715132 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:21:38.719249 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:21:38.726829 kernel: loop1: detected capacity change from 0 to 224512 Jan 23 01:21:38.725788 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:21:38.738185 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:21:38.753143 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:21:38.771995 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 23 01:21:38.772017 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 23 01:21:38.787984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:21:38.788572 kernel: loop2: detected capacity change from 0 to 128560 Jan 23 01:21:38.842497 kernel: loop3: detected capacity change from 0 to 110984 Jan 23 01:21:38.880459 kernel: loop4: detected capacity change from 0 to 8 Jan 23 01:21:38.886608 kernel: loop5: detected capacity change from 0 to 224512 Jan 23 01:21:38.909474 kernel: loop6: detected capacity change from 0 to 128560 Jan 23 01:21:38.934459 kernel: loop7: detected capacity change from 0 to 110984 Jan 23 01:21:38.957773 (sd-merge)[1246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Jan 23 01:21:38.961949 (sd-merge)[1246]: Merged extensions into '/usr'. Jan 23 01:21:38.969749 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:21:38.969917 systemd[1]: Reloading... Jan 23 01:21:39.111464 zram_generator::config[1272]: No configuration found. Jan 23 01:21:39.209513 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:21:39.319203 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:21:39.320086 systemd[1]: Reloading finished in 349 ms. Jan 23 01:21:39.340067 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:21:39.341558 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:21:39.342758 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:21:39.356234 systemd[1]: Starting ensure-sysext.service... Jan 23 01:21:39.360545 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:21:39.367647 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:21:39.383230 systemd[1]: Reload requested from client PID 1316 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:21:39.383247 systemd[1]: Reloading... Jan 23 01:21:39.385996 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:21:39.386602 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:21:39.386979 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:21:39.387373 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
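The (sd-merge) lines above show systemd-sysext overlaying the extension images staged earlier (including the /etc/extensions/kubernetes.raw symlink written by Ignition) onto /usr. The sketch below covers only the discovery half: scanning the usual extension directories for *.raw images and resolving symlinks; the directory list reflects the sysext search path as I understand it, and the actual read-only overlay of /usr is not reproduced.

```python
#!/usr/bin/env python3
"""Sketch of the discovery half of systemd-sysext visible in the "(sd-merge)"
lines above: scan common extension directories for *.raw images (including
symlinks like the /etc/extensions/kubernetes.raw one written by Ignition).
The directory list is an approximation of the sysext search path; the overlay
mount of /usr itself is not reproduced here."""
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[tuple[str, Path]]:
    found = []
    for d in SEARCH_DIRS:
        for entry in sorted(Path(d).glob("*.raw")):
            # Symlinks are resolved, mirroring the kubernetes.raw -> /opt/... link.
            found.append((entry.stem, entry.resolve()))
    return found

if __name__ == "__main__":
    for name, image in discover_extensions():
        print(f"extension {name!r} -> {image}")
```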
Jan 23 01:21:39.389112 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:21:39.389514 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. Jan 23 01:21:39.389644 systemd-tmpfiles[1317]: ACLs are not supported, ignoring. Jan 23 01:21:39.396826 systemd-tmpfiles[1317]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:21:39.396838 systemd-tmpfiles[1317]: Skipping /boot Jan 23 01:21:39.422883 systemd-tmpfiles[1317]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:21:39.422962 systemd-tmpfiles[1317]: Skipping /boot Jan 23 01:21:39.444129 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Jan 23 01:21:39.482470 zram_generator::config[1344]: No configuration found. Jan 23 01:21:39.756811 systemd[1]: Reloading finished in 373 ms. Jan 23 01:21:39.757686 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 01:21:39.767902 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:21:39.769221 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:21:39.791515 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:21:39.792111 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:21:39.797559 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:21:39.805790 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:21:39.811477 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:21:39.809200 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:21:39.816580 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:21:39.820610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:21:39.824571 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:21:39.834375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:21:39.834711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:21:39.836055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:21:39.852504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:21:39.866183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:21:39.868618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:21:39.868719 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:21:39.873185 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:21:39.874691 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:21:39.880554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
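The 'Duplicate line for path ..., ignoring' warnings above mean two tmpfiles.d entries claim the same path, and the later one is dropped. The sketch below parses the whitespace-separated tmpfiles.d format just far enough to compare the path column and report duplicates; the directory scanned is the standard tmpfiles.d location, but the check is a simplification of what systemd-tmpfiles actually does.

```python
#!/usr/bin/env python3
"""Sketch of the duplicate-path check behind the "Duplicate line for path ...,
ignoring" warnings above. It parses the whitespace-separated tmpfiles.d format
just far enough to compare the path column; a real implementation also merges
/etc and /run overrides, which this skips."""
from pathlib import Path

def find_duplicates(conf_files: list[Path]) -> None:
    seen: dict[str, tuple[Path, int]] = {}
    for conf in conf_files:
        for lineno, line in enumerate(conf.read_text().splitlines(), start=1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) < 2:
                continue
            path = fields[1]  # column 2 of a tmpfiles.d line is the path
            if path in seen:
                first_file, first_line = seen[path]
                print(f"{conf}:{lineno}: Duplicate line for path \"{path}\", "
                      f"ignoring (first seen in {first_file}:{first_line})")
            else:
                seen[path] = (conf, lineno)

if __name__ == "__main__":
    find_duplicates(sorted(Path("/usr/lib/tmpfiles.d").glob("*.conf")))
```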
Jan 23 01:21:39.881792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:21:39.881946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:21:39.882018 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:21:39.882089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:21:39.887675 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:21:39.887889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:21:39.890915 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:21:39.892680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:21:39.892770 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:21:39.892878 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:21:39.912218 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:21:39.914735 systemd[1]: Finished ensure-sysext.service. Jan 23 01:21:39.952649 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 01:21:39.955175 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:21:39.955668 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:21:39.964008 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:21:39.969639 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:21:39.970919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:21:39.973698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:21:39.975035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:21:39.975546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:21:39.977923 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:21:39.978191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:21:39.980559 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 01:21:39.984217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:21:39.984630 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 01:21:39.984311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:21:40.013596 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 23 01:21:40.031058 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:21:40.034463 augenrules[1478]: No rules Jan 23 01:21:40.034396 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:21:40.036488 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:21:40.038615 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:21:40.041001 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:21:40.080653 kernel: EDAC MC: Ver: 3.0.0 Jan 23 01:21:40.093624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:21:40.189663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 01:21:40.192163 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:21:40.219274 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:21:40.309297 systemd-networkd[1427]: lo: Link UP Jan 23 01:21:40.309306 systemd-networkd[1427]: lo: Gained carrier Jan 23 01:21:40.314346 systemd-networkd[1427]: Enumeration completed Jan 23 01:21:40.316553 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:21:40.316953 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:21:40.316958 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:21:40.320728 systemd-networkd[1427]: eth0: Link UP Jan 23 01:21:40.320924 systemd-networkd[1427]: eth0: Gained carrier Jan 23 01:21:40.320937 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:21:40.362664 systemd-resolved[1429]: Positive Trust Anchors: Jan 23 01:21:40.362682 systemd-resolved[1429]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:21:40.362709 systemd-resolved[1429]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:21:40.366303 systemd-resolved[1429]: Defaulting to hostname 'linux'. Jan 23 01:21:40.374816 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 01:21:40.376751 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:21:40.378187 systemd[1]: Reached target network.target - Network. Jan 23 01:21:40.379161 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:21:40.379980 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:21:40.382393 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
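The negative trust anchor list above keeps resolved from forwarding reverse lookups for private and link-local ranges to public DNS; the eth0 address acquired earlier, 172.238.187.240, is public and therefore not covered. A small standard-library illustration of how an address maps to the in-addr.arpa zones on that list:

```python
#!/usr/bin/env python3
"""Small illustration of the reverse-DNS zones in resolved's negative trust
anchor list above: private ranges fall under zones like 16.172.in-addr.arpa,
while the public DHCP address from this log does not."""
import ipaddress

for addr in ("172.238.187.240",   # eth0 address from the log (public, forwarded)
             "172.16.0.1",        # RFC 1918, covered by 16.172.in-addr.arpa
             "192.168.1.10"):     # RFC 1918, covered by 168.192.in-addr.arpa
    ip = ipaddress.ip_address(addr)
    print(f"{addr:>16} private={ip.is_private!s:5} reverse={ip.reverse_pointer}")
```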
Jan 23 01:21:40.384551 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:21:40.387846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:21:40.389049 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:21:40.390059 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:21:40.391351 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:21:40.393485 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:21:40.394866 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:21:40.395913 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:21:40.396738 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:21:40.397720 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:21:40.397753 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:21:40.398461 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:21:40.401160 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:21:40.405644 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:21:40.409243 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:21:40.433254 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:21:40.434266 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:21:40.446692 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:21:40.448037 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:21:40.449898 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:21:40.451072 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:21:40.453453 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:21:40.454191 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:21:40.455144 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:21:40.455187 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:21:40.458519 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:21:40.460961 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:21:40.464409 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:21:40.470368 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:21:40.473654 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:21:40.478304 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:21:40.479046 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 23 01:21:40.481678 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:21:40.487956 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:21:40.495579 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 01:21:40.499600 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:21:40.505600 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:21:40.507993 jq[1515]: false Jan 23 01:21:40.511646 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:21:40.513194 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:21:40.515677 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:21:40.516701 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:21:40.521617 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:21:40.540533 jq[1526]: true Jan 23 01:21:40.543764 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:21:40.546166 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:21:40.547085 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:21:40.547451 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:21:40.547949 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:21:40.558125 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing passwd entry cache Jan 23 01:21:40.558128 oslogin_cache_refresh[1517]: Refreshing passwd entry cache Jan 23 01:21:40.576091 oslogin_cache_refresh[1517]: Failure getting users, quitting Jan 23 01:21:40.576539 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting users, quitting Jan 23 01:21:40.576539 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:21:40.576539 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing group entry cache Jan 23 01:21:40.576108 oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:21:40.576150 oslogin_cache_refresh[1517]: Refreshing group entry cache Jan 23 01:21:40.578599 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting groups, quitting Jan 23 01:21:40.578599 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:21:40.576767 oslogin_cache_refresh[1517]: Failure getting groups, quitting Jan 23 01:21:40.576777 oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:21:40.583872 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:21:40.584613 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:21:40.592592 update_engine[1525]: I20260123 01:21:40.591527 1525 main.cc:92] Flatcar Update Engine starting Jan 23 01:21:40.598155 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 23 01:21:40.599839 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:21:40.609949 jq[1532]: true Jan 23 01:21:40.616783 (ntainerd)[1549]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:21:40.639415 tar[1536]: linux-amd64/LICENSE Jan 23 01:21:40.639415 tar[1536]: linux-amd64/helm Jan 23 01:21:40.639716 extend-filesystems[1516]: Found /dev/sda6 Jan 23 01:21:40.648245 extend-filesystems[1516]: Found /dev/sda9 Jan 23 01:21:40.653562 extend-filesystems[1516]: Checking size of /dev/sda9 Jan 23 01:21:40.659261 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:21:40.659086 dbus-daemon[1513]: [system] SELinux support is enabled Jan 23 01:21:40.663843 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:21:40.663875 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:21:40.665627 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:21:40.665646 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:21:40.682700 coreos-metadata[1512]: Jan 23 01:21:40.682 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 01:21:40.686537 extend-filesystems[1516]: Resized partition /dev/sda9 Jan 23 01:21:40.694458 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:21:40.696509 update_engine[1525]: I20260123 01:21:40.693833 1525 update_check_scheduler.cc:74] Next update check in 7m12s Jan 23 01:21:40.692806 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:21:40.707485 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Jan 23 01:21:40.714525 systemd-logind[1524]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 01:21:40.714558 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:21:40.715113 systemd-logind[1524]: New seat seat0. Jan 23 01:21:40.717306 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:21:40.718976 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:21:40.856770 bash[1578]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:21:40.859148 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:21:40.874552 systemd[1]: Starting sshkeys.service... Jan 23 01:21:40.928882 locksmithd[1570]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:21:40.934146 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 01:21:40.940315 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 01:21:41.000918 containerd[1549]: time="2026-01-23T01:21:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:21:41.002363 containerd[1549]: time="2026-01-23T01:21:41.002340957Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:21:41.036708 containerd[1549]: time="2026-01-23T01:21:41.036609073Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.73µs" Jan 23 01:21:41.037204 containerd[1549]: time="2026-01-23T01:21:41.036862953Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:21:41.037204 containerd[1549]: time="2026-01-23T01:21:41.036891723Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:21:41.037926 containerd[1549]: time="2026-01-23T01:21:41.037721022Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:21:41.038019 containerd[1549]: time="2026-01-23T01:21:41.037999102Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:21:41.038378 containerd[1549]: time="2026-01-23T01:21:41.038148591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:21:41.038836 containerd[1549]: time="2026-01-23T01:21:41.038695891Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:21:41.038836 containerd[1549]: time="2026-01-23T01:21:41.038721191Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:21:41.040755 containerd[1549]: time="2026-01-23T01:21:41.040301209Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:21:41.040926 containerd[1549]: time="2026-01-23T01:21:41.040905579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:21:41.041097 containerd[1549]: time="2026-01-23T01:21:41.041076869Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:21:41.042481 containerd[1549]: time="2026-01-23T01:21:41.041200518Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:21:41.042481 containerd[1549]: time="2026-01-23T01:21:41.041357718Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:21:41.042481 containerd[1549]: time="2026-01-23T01:21:41.041698628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:21:41.042481 containerd[1549]: time="2026-01-23T01:21:41.041738328Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 23 01:21:41.042481 containerd[1549]: time="2026-01-23T01:21:41.041753318Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:21:41.042712 containerd[1549]: time="2026-01-23T01:21:41.042689417Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:21:41.044215 containerd[1549]: time="2026-01-23T01:21:41.044191875Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:21:41.044397 containerd[1549]: time="2026-01-23T01:21:41.044378075Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:21:41.056940 coreos-metadata[1594]: Jan 23 01:21:41.056 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 01:21:41.065110 containerd[1549]: time="2026-01-23T01:21:41.065077835Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:21:41.065185 containerd[1549]: time="2026-01-23T01:21:41.065121265Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:21:41.065185 containerd[1549]: time="2026-01-23T01:21:41.065135635Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:21:41.065185 containerd[1549]: time="2026-01-23T01:21:41.065146764Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:21:41.065185 containerd[1549]: time="2026-01-23T01:21:41.065157624Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:21:41.065185 containerd[1549]: time="2026-01-23T01:21:41.065166174Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:21:41.065282 containerd[1549]: time="2026-01-23T01:21:41.065210734Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:21:41.065282 containerd[1549]: time="2026-01-23T01:21:41.065226894Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:21:41.065282 containerd[1549]: time="2026-01-23T01:21:41.065236754Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:21:41.065282 containerd[1549]: time="2026-01-23T01:21:41.065245304Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:21:41.065282 containerd[1549]: time="2026-01-23T01:21:41.065254294Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:21:41.065282 containerd[1549]: time="2026-01-23T01:21:41.065273034Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:21:41.065619 containerd[1549]: time="2026-01-23T01:21:41.065415764Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:21:41.065648 containerd[1549]: time="2026-01-23T01:21:41.065623274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:21:41.065648 containerd[1549]: time="2026-01-23T01:21:41.065645144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 
01:21:41.065691 containerd[1549]: time="2026-01-23T01:21:41.065655664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:21:41.065691 containerd[1549]: time="2026-01-23T01:21:41.065670944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:21:41.065691 containerd[1549]: time="2026-01-23T01:21:41.065680634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:21:41.065691 containerd[1549]: time="2026-01-23T01:21:41.065690724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:21:41.065761 containerd[1549]: time="2026-01-23T01:21:41.065700564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:21:41.066061 containerd[1549]: time="2026-01-23T01:21:41.065710854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:21:41.066061 containerd[1549]: time="2026-01-23T01:21:41.065880734Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:21:41.066061 containerd[1549]: time="2026-01-23T01:21:41.065892594Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:21:41.066061 containerd[1549]: time="2026-01-23T01:21:41.065935824Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:21:41.066061 containerd[1549]: time="2026-01-23T01:21:41.065946904Z" level=info msg="Start snapshots syncer" Jan 23 01:21:41.066145 containerd[1549]: time="2026-01-23T01:21:41.065974244Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:21:41.066843 containerd[1549]: time="2026-01-23T01:21:41.066697143Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:21:41.066843 containerd[1549]: time="2026-01-23T01:21:41.066749323Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:21:41.068579 containerd[1549]: time="2026-01-23T01:21:41.068527621Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:21:41.068949 containerd[1549]: time="2026-01-23T01:21:41.068815421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:21:41.069063 containerd[1549]: time="2026-01-23T01:21:41.069038821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:21:41.069063 containerd[1549]: time="2026-01-23T01:21:41.069061311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:21:41.069112 containerd[1549]: time="2026-01-23T01:21:41.069071431Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:21:41.069112 containerd[1549]: time="2026-01-23T01:21:41.069088041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:21:41.069311 containerd[1549]: time="2026-01-23T01:21:41.069258470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:21:41.069311 containerd[1549]: time="2026-01-23T01:21:41.069300610Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:21:41.069571 containerd[1549]: time="2026-01-23T01:21:41.069324550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:21:41.070620 containerd[1549]: 
time="2026-01-23T01:21:41.070600159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:21:41.070650 containerd[1549]: time="2026-01-23T01:21:41.070620439Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:21:41.070830 containerd[1549]: time="2026-01-23T01:21:41.070652419Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:21:41.070830 containerd[1549]: time="2026-01-23T01:21:41.070696039Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:21:41.070830 containerd[1549]: time="2026-01-23T01:21:41.070704839Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:21:41.070830 containerd[1549]: time="2026-01-23T01:21:41.070714899Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:21:41.070830 containerd[1549]: time="2026-01-23T01:21:41.070722569Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:21:41.070830 containerd[1549]: time="2026-01-23T01:21:41.070790079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:21:41.070830 containerd[1549]: time="2026-01-23T01:21:41.070825289Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:21:41.070950 containerd[1549]: time="2026-01-23T01:21:41.070843219Z" level=info msg="runtime interface created" Jan 23 01:21:41.070950 containerd[1549]: time="2026-01-23T01:21:41.070849209Z" level=info msg="created NRI interface" Jan 23 01:21:41.070950 containerd[1549]: time="2026-01-23T01:21:41.070857469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:21:41.070950 containerd[1549]: time="2026-01-23T01:21:41.070869179Z" level=info msg="Connect containerd service" Jan 23 01:21:41.070950 containerd[1549]: time="2026-01-23T01:21:41.070930009Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:21:41.073730 containerd[1549]: time="2026-01-23T01:21:41.073695216Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:21:41.105507 systemd-networkd[1427]: eth0: DHCPv4 address 172.238.187.240/24, gateway 172.238.187.1 acquired from 23.205.167.160 Jan 23 01:21:41.105590 dbus-daemon[1513]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1427 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 01:21:41.108249 systemd-timesyncd[1461]: Network configuration changed, trying to establish connection. Jan 23 01:21:41.111636 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 23 01:21:41.149457 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jan 23 01:21:41.176867 extend-filesystems[1568]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 01:21:41.176867 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 23 01:21:41.176867 extend-filesystems[1568]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jan 23 01:21:41.191356 extend-filesystems[1516]: Resized filesystem in /dev/sda9 Jan 23 01:21:41.181186 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:21:41.181481 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:21:41.248121 containerd[1549]: time="2026-01-23T01:21:41.247621362Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:21:41.248121 containerd[1549]: time="2026-01-23T01:21:41.247774482Z" level=info msg="Start subscribing containerd event" Jan 23 01:21:41.248458 containerd[1549]: time="2026-01-23T01:21:41.248329731Z" level=info msg="Start recovering state" Jan 23 01:21:41.248625 containerd[1549]: time="2026-01-23T01:21:41.248611271Z" level=info msg="Start event monitor" Jan 23 01:21:41.248697 containerd[1549]: time="2026-01-23T01:21:41.248685721Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:21:41.248803 containerd[1549]: time="2026-01-23T01:21:41.248730621Z" level=info msg="Start streaming server" Jan 23 01:21:41.249007 containerd[1549]: time="2026-01-23T01:21:41.248944381Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:21:41.249007 containerd[1549]: time="2026-01-23T01:21:41.248957711Z" level=info msg="runtime interface starting up..." Jan 23 01:21:41.249007 containerd[1549]: time="2026-01-23T01:21:41.248964461Z" level=info msg="starting plugins..." Jan 23 01:21:41.249007 containerd[1549]: time="2026-01-23T01:21:41.248983671Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:21:41.250560 containerd[1549]: time="2026-01-23T01:21:41.250398539Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:21:41.252702 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:21:41.258685 containerd[1549]: time="2026-01-23T01:21:41.256676743Z" level=info msg="containerd successfully booted in 0.260344s" Jan 23 01:21:41.276869 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 01:21:41.280946 dbus-daemon[1513]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 01:21:41.283830 dbus-daemon[1513]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1603 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 01:21:41.292363 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 01:21:42.221389 systemd-timesyncd[1461]: Contacted time server 185.234.20.134:123 (0.flatcar.pool.ntp.org). Jan 23 01:21:42.221597 systemd-timesyncd[1461]: Initial clock synchronization to Fri 2026-01-23 01:21:42.220606 UTC. Jan 23 01:21:42.223096 systemd-resolved[1429]: Clock change detected. Flushing caches. Jan 23 01:21:42.251094 tar[1536]: linux-amd64/README.md Jan 23 01:21:42.278317 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 23 01:21:42.308570 polkitd[1615]: Started polkitd version 126 Jan 23 01:21:42.315473 polkitd[1615]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 01:21:42.315798 polkitd[1615]: Loading rules from directory /run/polkit-1/rules.d Jan 23 01:21:42.315856 polkitd[1615]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:21:42.316072 polkitd[1615]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 01:21:42.316101 polkitd[1615]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:21:42.316138 polkitd[1615]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 01:21:42.316788 polkitd[1615]: Finished loading, compiling and executing 2 rules Jan 23 01:21:42.317048 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 01:21:42.318154 dbus-daemon[1513]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 01:21:42.319219 polkitd[1615]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 01:21:42.324174 systemd-networkd[1427]: eth0: Gained IPv6LL Jan 23 01:21:42.328486 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:21:42.331287 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:21:42.335854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:21:42.338418 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:21:42.339380 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:21:42.345971 systemd-hostnamed[1603]: Hostname set to <172-238-187-240> (transient) Jan 23 01:21:42.346274 systemd-resolved[1429]: System hostname changed to '172-238-187-240'. Jan 23 01:21:42.376313 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:21:42.381945 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:21:42.392141 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:21:42.403632 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:21:42.404045 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:21:42.409169 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:21:42.430396 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:21:42.434566 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:21:42.437920 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:21:42.439076 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 23 01:21:42.575346 coreos-metadata[1512]: Jan 23 01:21:42.575 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 01:21:42.677814 coreos-metadata[1512]: Jan 23 01:21:42.677 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 23 01:21:42.905837 coreos-metadata[1512]: Jan 23 01:21:42.905 INFO Fetch successful Jan 23 01:21:42.905837 coreos-metadata[1512]: Jan 23 01:21:42.905 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 23 01:21:42.951804 coreos-metadata[1594]: Jan 23 01:21:42.951 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 01:21:43.071102 coreos-metadata[1594]: Jan 23 01:21:43.071 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 23 01:21:43.172296 coreos-metadata[1512]: Jan 23 01:21:43.172 INFO Fetch successful Jan 23 01:21:43.209512 coreos-metadata[1594]: Jan 23 01:21:43.209 INFO Fetch successful Jan 23 01:21:43.248615 update-ssh-keys[1666]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:21:43.257786 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 01:21:43.262207 systemd[1]: Finished sshkeys.service. Jan 23 01:21:43.307054 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:21:43.310520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:21:43.313147 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:21:43.314121 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:21:43.315876 systemd[1]: Startup finished in 3.186s (kernel) + 8.677s (initrd) + 5.158s (userspace) = 17.022s. Jan 23 01:21:43.322210 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:21:43.850730 kubelet[1686]: E0123 01:21:43.850665 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:21:43.853998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:21:43.854195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:21:43.854609 systemd[1]: kubelet.service: Consumed 904ms CPU time, 265M memory peak. Jan 23 01:21:43.915355 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:21:43.916829 systemd[1]: Started sshd@0-172.238.187.240:22-68.220.241.50:42924.service - OpenSSH per-connection server daemon (68.220.241.50:42924). Jan 23 01:21:44.095884 sshd[1699]: Accepted publickey for core from 68.220.241.50 port 42924 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:21:44.097713 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:21:44.104843 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:21:44.106590 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:21:44.115184 systemd-logind[1524]: New session 1 of user core. Jan 23 01:21:44.126265 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:21:44.130135 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 23 01:21:44.151112 (systemd)[1704]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:21:44.154088 systemd-logind[1524]: New session c1 of user core. Jan 23 01:21:44.300034 systemd[1704]: Queued start job for default target default.target. Jan 23 01:21:44.315224 systemd[1704]: Created slice app.slice - User Application Slice. Jan 23 01:21:44.315251 systemd[1704]: Reached target paths.target - Paths. Jan 23 01:21:44.315293 systemd[1704]: Reached target timers.target - Timers. Jan 23 01:21:44.317267 systemd[1704]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:21:44.330485 systemd[1704]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:21:44.330558 systemd[1704]: Reached target sockets.target - Sockets. Jan 23 01:21:44.330617 systemd[1704]: Reached target basic.target - Basic System. Jan 23 01:21:44.330708 systemd[1704]: Reached target default.target - Main User Target. Jan 23 01:21:44.330744 systemd[1704]: Startup finished in 169ms. Jan 23 01:21:44.330811 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:21:44.338945 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:21:44.511843 systemd[1]: Started sshd@1-172.238.187.240:22-68.220.241.50:42926.service - OpenSSH per-connection server daemon (68.220.241.50:42926). Jan 23 01:21:44.716573 sshd[1715]: Accepted publickey for core from 68.220.241.50 port 42926 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:21:44.718098 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:21:44.723517 systemd-logind[1524]: New session 2 of user core. Jan 23 01:21:44.733754 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:21:44.884409 sshd[1718]: Connection closed by 68.220.241.50 port 42926 Jan 23 01:21:44.884909 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Jan 23 01:21:44.889452 systemd-logind[1524]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:21:44.890328 systemd[1]: sshd@1-172.238.187.240:22-68.220.241.50:42926.service: Deactivated successfully. Jan 23 01:21:44.892210 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:21:44.893903 systemd-logind[1524]: Removed session 2. Jan 23 01:21:44.931753 systemd[1]: Started sshd@2-172.238.187.240:22-68.220.241.50:42928.service - OpenSSH per-connection server daemon (68.220.241.50:42928). Jan 23 01:21:45.210362 sshd[1724]: Accepted publickey for core from 68.220.241.50 port 42928 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:21:45.211865 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:21:45.216871 systemd-logind[1524]: New session 3 of user core. Jan 23 01:21:45.227771 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:21:45.402141 sshd[1727]: Connection closed by 68.220.241.50 port 42928 Jan 23 01:21:45.402824 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Jan 23 01:21:45.406744 systemd[1]: sshd@2-172.238.187.240:22-68.220.241.50:42928.service: Deactivated successfully. Jan 23 01:21:45.408941 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:21:45.409649 systemd-logind[1524]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:21:45.411382 systemd-logind[1524]: Removed session 3. 
Jan 23 01:21:45.433866 systemd[1]: Started sshd@3-172.238.187.240:22-68.220.241.50:42930.service - OpenSSH per-connection server daemon (68.220.241.50:42930). Jan 23 01:21:45.598366 sshd[1733]: Accepted publickey for core from 68.220.241.50 port 42930 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:21:45.599560 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:21:45.605077 systemd-logind[1524]: New session 4 of user core. Jan 23 01:21:45.613774 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:21:45.725897 sshd[1736]: Connection closed by 68.220.241.50 port 42930 Jan 23 01:21:45.726399 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jan 23 01:21:45.730131 systemd[1]: sshd@3-172.238.187.240:22-68.220.241.50:42930.service: Deactivated successfully. Jan 23 01:21:45.732015 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:21:45.733030 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:21:45.734818 systemd-logind[1524]: Removed session 4. Jan 23 01:21:45.767295 systemd[1]: Started sshd@4-172.238.187.240:22-68.220.241.50:42934.service - OpenSSH per-connection server daemon (68.220.241.50:42934). Jan 23 01:21:45.993836 sshd[1742]: Accepted publickey for core from 68.220.241.50 port 42934 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:21:45.995527 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:21:46.001471 systemd-logind[1524]: New session 5 of user core. Jan 23 01:21:46.007952 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:21:46.143390 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:21:46.143726 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:21:46.159297 sudo[1746]: pam_unix(sudo:session): session closed for user root Jan 23 01:21:46.191752 sshd[1745]: Connection closed by 68.220.241.50 port 42934 Jan 23 01:21:46.193466 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jan 23 01:21:46.197653 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:21:46.198389 systemd[1]: sshd@4-172.238.187.240:22-68.220.241.50:42934.service: Deactivated successfully. Jan 23 01:21:46.200594 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:21:46.202408 systemd-logind[1524]: Removed session 5. Jan 23 01:21:46.219951 systemd[1]: Started sshd@5-172.238.187.240:22-68.220.241.50:42948.service - OpenSSH per-connection server daemon (68.220.241.50:42948). Jan 23 01:21:46.383155 sshd[1752]: Accepted publickey for core from 68.220.241.50 port 42948 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:21:46.384208 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:21:46.389691 systemd-logind[1524]: New session 6 of user core. Jan 23 01:21:46.398778 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 01:21:46.495085 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:21:46.495534 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:21:46.500725 sudo[1757]: pam_unix(sudo:session): session closed for user root Jan 23 01:21:46.506909 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:21:46.507275 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:21:46.517979 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:21:46.560846 augenrules[1779]: No rules Jan 23 01:21:46.562481 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:21:46.563028 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:21:46.563849 sudo[1756]: pam_unix(sudo:session): session closed for user root Jan 23 01:21:46.585561 sshd[1755]: Connection closed by 68.220.241.50 port 42948 Jan 23 01:21:46.587417 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Jan 23 01:21:46.590936 systemd[1]: sshd@5-172.238.187.240:22-68.220.241.50:42948.service: Deactivated successfully. Jan 23 01:21:46.593037 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:21:46.594306 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:21:46.596341 systemd-logind[1524]: Removed session 6. Jan 23 01:21:46.616166 systemd[1]: Started sshd@6-172.238.187.240:22-68.220.241.50:42958.service - OpenSSH per-connection server daemon (68.220.241.50:42958). Jan 23 01:21:46.785465 sshd[1788]: Accepted publickey for core from 68.220.241.50 port 42958 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:21:46.787382 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:21:46.792968 systemd-logind[1524]: New session 7 of user core. Jan 23 01:21:46.799758 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:21:46.895190 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:21:46.895511 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:21:47.177084 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:21:47.196049 (dockerd)[1809]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:21:47.412170 dockerd[1809]: time="2026-01-23T01:21:47.412089909Z" level=info msg="Starting up" Jan 23 01:21:47.413176 dockerd[1809]: time="2026-01-23T01:21:47.413151868Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:21:47.425042 dockerd[1809]: time="2026-01-23T01:21:47.425017116Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:21:47.438895 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3842530412-merged.mount: Deactivated successfully. Jan 23 01:21:47.478081 systemd[1]: var-lib-docker-metacopy\x2dcheck3191322915-merged.mount: Deactivated successfully. Jan 23 01:21:47.498832 dockerd[1809]: time="2026-01-23T01:21:47.498801432Z" level=info msg="Loading containers: start." 
Jan 23 01:21:47.508892 kernel: Initializing XFRM netlink socket Jan 23 01:21:47.767509 systemd-networkd[1427]: docker0: Link UP Jan 23 01:21:47.770774 dockerd[1809]: time="2026-01-23T01:21:47.770734480Z" level=info msg="Loading containers: done." Jan 23 01:21:47.785745 dockerd[1809]: time="2026-01-23T01:21:47.785709975Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:21:47.785865 dockerd[1809]: time="2026-01-23T01:21:47.785771495Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:21:47.785865 dockerd[1809]: time="2026-01-23T01:21:47.785847725Z" level=info msg="Initializing buildkit" Jan 23 01:21:47.810562 dockerd[1809]: time="2026-01-23T01:21:47.810539020Z" level=info msg="Completed buildkit initialization" Jan 23 01:21:47.815059 dockerd[1809]: time="2026-01-23T01:21:47.815035156Z" level=info msg="Daemon has completed initialization" Jan 23 01:21:47.815135 dockerd[1809]: time="2026-01-23T01:21:47.815078746Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:21:47.815319 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:21:48.434211 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2209469236-merged.mount: Deactivated successfully. Jan 23 01:21:48.458505 containerd[1549]: time="2026-01-23T01:21:48.458474082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 01:21:48.995534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895056981.mount: Deactivated successfully. Jan 23 01:21:50.379119 containerd[1549]: time="2026-01-23T01:21:50.379052082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:50.381424 containerd[1549]: time="2026-01-23T01:21:50.381269570Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070653" Jan 23 01:21:50.382219 containerd[1549]: time="2026-01-23T01:21:50.382193639Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:50.384720 containerd[1549]: time="2026-01-23T01:21:50.384492756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:50.385574 containerd[1549]: time="2026-01-23T01:21:50.385551215Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.927043193s" Jan 23 01:21:50.385668 containerd[1549]: time="2026-01-23T01:21:50.385634165Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 01:21:50.386813 containerd[1549]: time="2026-01-23T01:21:50.386791144Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 
23 01:21:51.758719 containerd[1549]: time="2026-01-23T01:21:51.758578602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:51.759700 containerd[1549]: time="2026-01-23T01:21:51.759448341Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993360" Jan 23 01:21:51.760232 containerd[1549]: time="2026-01-23T01:21:51.760203451Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:51.762096 containerd[1549]: time="2026-01-23T01:21:51.762075859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:51.763086 containerd[1549]: time="2026-01-23T01:21:51.763059048Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.376245284s" Jan 23 01:21:51.763148 containerd[1549]: time="2026-01-23T01:21:51.763089168Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 01:21:51.764001 containerd[1549]: time="2026-01-23T01:21:51.763982467Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 01:21:52.979455 containerd[1549]: time="2026-01-23T01:21:52.979388171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:52.980534 containerd[1549]: time="2026-01-23T01:21:52.980507150Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405082" Jan 23 01:21:52.980909 containerd[1549]: time="2026-01-23T01:21:52.980830510Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:52.986581 containerd[1549]: time="2026-01-23T01:21:52.986551444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:52.988773 containerd[1549]: time="2026-01-23T01:21:52.988211253Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.224145836s" Jan 23 01:21:52.988773 containerd[1549]: time="2026-01-23T01:21:52.988297382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 01:21:52.989297 
containerd[1549]: time="2026-01-23T01:21:52.989245612Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 01:21:54.006509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount602028566.mount: Deactivated successfully. Jan 23 01:21:54.008692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:21:54.012922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:21:54.230028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:21:54.239290 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:21:54.288926 kubelet[2101]: E0123 01:21:54.288792 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:21:54.295622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:21:54.296125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:21:54.296588 systemd[1]: kubelet.service: Consumed 216ms CPU time, 110.6M memory peak. Jan 23 01:21:54.487884 containerd[1549]: time="2026-01-23T01:21:54.487821313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:54.488794 containerd[1549]: time="2026-01-23T01:21:54.488651382Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161905" Jan 23 01:21:54.489302 containerd[1549]: time="2026-01-23T01:21:54.489278031Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:54.490701 containerd[1549]: time="2026-01-23T01:21:54.490680700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:54.491158 containerd[1549]: time="2026-01-23T01:21:54.491133230Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.501859299s" Jan 23 01:21:54.491196 containerd[1549]: time="2026-01-23T01:21:54.491162680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 01:21:54.492143 containerd[1549]: time="2026-01-23T01:21:54.491984459Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 01:21:54.999987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169399292.mount: Deactivated successfully. 
Jan 23 01:21:55.653974 containerd[1549]: time="2026-01-23T01:21:55.653926957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:55.654851 containerd[1549]: time="2026-01-23T01:21:55.654828346Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565247" Jan 23 01:21:55.655349 containerd[1549]: time="2026-01-23T01:21:55.655310715Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:55.657263 containerd[1549]: time="2026-01-23T01:21:55.657230514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:55.658227 containerd[1549]: time="2026-01-23T01:21:55.658071773Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.166064304s" Jan 23 01:21:55.658227 containerd[1549]: time="2026-01-23T01:21:55.658096023Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 01:21:55.658838 containerd[1549]: time="2026-01-23T01:21:55.658814402Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 01:21:56.200163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175487979.mount: Deactivated successfully. 
Jan 23 01:21:56.204975 containerd[1549]: time="2026-01-23T01:21:56.204929946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:21:56.206085 containerd[1549]: time="2026-01-23T01:21:56.206032645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jan 23 01:21:56.206981 containerd[1549]: time="2026-01-23T01:21:56.206948284Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:21:56.209741 containerd[1549]: time="2026-01-23T01:21:56.209000062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:21:56.209741 containerd[1549]: time="2026-01-23T01:21:56.209585541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 550.744959ms" Jan 23 01:21:56.209741 containerd[1549]: time="2026-01-23T01:21:56.209610671Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 01:21:56.210285 containerd[1549]: time="2026-01-23T01:21:56.210260800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 01:21:56.792020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258219610.mount: Deactivated successfully. 
Jan 23 01:21:58.414381 containerd[1549]: time="2026-01-23T01:21:58.414329626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:58.416062 containerd[1549]: time="2026-01-23T01:21:58.415539195Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682062" Jan 23 01:21:58.416606 containerd[1549]: time="2026-01-23T01:21:58.416570414Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:58.419216 containerd[1549]: time="2026-01-23T01:21:58.419194302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:21:58.420262 containerd[1549]: time="2026-01-23T01:21:58.420241240Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.20995274s" Jan 23 01:21:58.420337 containerd[1549]: time="2026-01-23T01:21:58.420322880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 01:22:00.761912 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:22:00.762057 systemd[1]: kubelet.service: Consumed 216ms CPU time, 110.6M memory peak. Jan 23 01:22:00.765878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:22:00.797064 systemd[1]: Reload requested from client PID 2245 ('systemctl') (unit session-7.scope)... Jan 23 01:22:00.797179 systemd[1]: Reloading... Jan 23 01:22:00.947684 zram_generator::config[2295]: No configuration found. Jan 23 01:22:01.169289 systemd[1]: Reloading finished in 371 ms. Jan 23 01:22:01.233356 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:22:01.233458 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:22:01.233753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:22:01.233797 systemd[1]: kubelet.service: Consumed 151ms CPU time, 98.3M memory peak. Jan 23 01:22:01.235469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:22:01.419534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:22:01.428226 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:22:01.463987 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:22:01.463987 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:22:01.463987 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:22:01.464335 kubelet[2343]: I0123 01:22:01.464038 2343 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:22:01.677036 kubelet[2343]: I0123 01:22:01.676943 2343 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:22:01.677036 kubelet[2343]: I0123 01:22:01.676969 2343 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:22:01.677394 kubelet[2343]: I0123 01:22:01.677183 2343 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:22:01.710845 kubelet[2343]: I0123 01:22:01.710275 2343 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:22:01.710845 kubelet[2343]: E0123 01:22:01.710489 2343 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.238.187.240:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.187.240:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:22:01.719111 kubelet[2343]: I0123 01:22:01.719091 2343 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:22:01.723558 kubelet[2343]: I0123 01:22:01.723538 2343 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:22:01.725503 kubelet[2343]: I0123 01:22:01.725467 2343 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:22:01.725631 kubelet[2343]: I0123 01:22:01.725498 2343 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-187-240","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:22:01.725946 kubelet[2343]: I0123 
01:22:01.725654 2343 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:22:01.725946 kubelet[2343]: I0123 01:22:01.725664 2343 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:22:01.726002 kubelet[2343]: I0123 01:22:01.725964 2343 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:22:01.729871 kubelet[2343]: I0123 01:22:01.729856 2343 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:22:01.730131 kubelet[2343]: I0123 01:22:01.730084 2343 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:22:01.730131 kubelet[2343]: I0123 01:22:01.730102 2343 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:22:01.730131 kubelet[2343]: I0123 01:22:01.730112 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:22:01.737011 kubelet[2343]: I0123 01:22:01.736990 2343 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:22:01.737320 kubelet[2343]: I0123 01:22:01.737298 2343 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 01:22:01.738559 kubelet[2343]: W0123 01:22:01.737976 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:22:01.738869 kubelet[2343]: W0123 01:22:01.738814 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.238.187.240:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-187-240&limit=500&resourceVersion=0": dial tcp 172.238.187.240:6443: connect: connection refused Jan 23 01:22:01.738920 kubelet[2343]: E0123 01:22:01.738877 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.238.187.240:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-187-240&limit=500&resourceVersion=0\": dial tcp 172.238.187.240:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:22:01.739895 kubelet[2343]: W0123 01:22:01.738949 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.238.187.240:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.238.187.240:6443: connect: connection refused Jan 23 01:22:01.739895 kubelet[2343]: E0123 01:22:01.738984 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.238.187.240:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.187.240:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:22:01.740021 kubelet[2343]: I0123 01:22:01.739997 2343 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:22:01.740121 kubelet[2343]: I0123 01:22:01.740027 2343 server.go:1287] "Started kubelet" Jan 23 01:22:01.741290 kubelet[2343]: I0123 01:22:01.740751 2343 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:22:01.746830 kubelet[2343]: I0123 01:22:01.746784 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:22:01.747325 kubelet[2343]: I0123 01:22:01.747304 2343 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:22:01.748523 kubelet[2343]: I0123 01:22:01.748494 2343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:22:01.751236 kubelet[2343]: I0123 01:22:01.751210 2343 server.go:479] "Adding debug handlers to kubelet server" Jan 23 01:22:01.752074 kubelet[2343]: I0123 01:22:01.752046 2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:22:01.753102 kubelet[2343]: E0123 01:22:01.751917 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.187.240:6443/api/v1/namespaces/default/events\": dial tcp 172.238.187.240:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-187-240.188d37905954ac2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-187-240,UID:172-238-187-240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-187-240,},FirstTimestamp:2026-01-23 01:22:01.740012591 +0000 UTC m=+0.307465144,LastTimestamp:2026-01-23 01:22:01.740012591 +0000 UTC m=+0.307465144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-187-240,}" Jan 23 01:22:01.754673 kubelet[2343]: I0123 01:22:01.754200 2343 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:22:01.754673 kubelet[2343]: E0123 01:22:01.754325 2343 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-187-240\" not found" Jan 23 01:22:01.754673 kubelet[2343]: I0123 01:22:01.754357 2343 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:22:01.754673 kubelet[2343]: I0123 01:22:01.754391 2343 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:22:01.754673 kubelet[2343]: W0123 01:22:01.754605 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.238.187.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.238.187.240:6443: connect: connection refused Jan 23 01:22:01.754830 kubelet[2343]: E0123 01:22:01.754812 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.238.187.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.187.240:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:22:01.754931 kubelet[2343]: E0123 01:22:01.754912 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.187.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-187-240?timeout=10s\": dial tcp 172.238.187.240:6443: connect: connection refused" interval="200ms" Jan 23 01:22:01.755168 kubelet[2343]: I0123 01:22:01.755155 2343 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:22:01.755285 kubelet[2343]: I0123 01:22:01.755272 2343 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:22:01.756266 kubelet[2343]: I0123 01:22:01.756253 2343 factory.go:221] Registration of the 
containerd container factory successfully Jan 23 01:22:01.774256 kubelet[2343]: I0123 01:22:01.774209 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:22:01.775589 kubelet[2343]: I0123 01:22:01.775559 2343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:22:01.775589 kubelet[2343]: I0123 01:22:01.775583 2343 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:22:01.775694 kubelet[2343]: I0123 01:22:01.775603 2343 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:22:01.775694 kubelet[2343]: I0123 01:22:01.775611 2343 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:22:01.775741 kubelet[2343]: E0123 01:22:01.775700 2343 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:22:01.782691 kubelet[2343]: W0123 01:22:01.782570 2343 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.238.187.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.238.187.240:6443: connect: connection refused Jan 23 01:22:01.782771 kubelet[2343]: E0123 01:22:01.782631 2343 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.238.187.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.187.240:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:22:01.785332 kubelet[2343]: E0123 01:22:01.785315 2343 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:22:01.791795 kubelet[2343]: I0123 01:22:01.791780 2343 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:22:01.791795 kubelet[2343]: I0123 01:22:01.791793 2343 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:22:01.791876 kubelet[2343]: I0123 01:22:01.791810 2343 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:22:01.793379 kubelet[2343]: I0123 01:22:01.793364 2343 policy_none.go:49] "None policy: Start" Jan 23 01:22:01.793422 kubelet[2343]: I0123 01:22:01.793383 2343 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:22:01.793422 kubelet[2343]: I0123 01:22:01.793395 2343 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:22:01.799832 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:22:01.814130 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:22:01.817610 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
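
The "Created slice kubepods*.slice" entries just above are the kubelet, running with cgroupDriver=systemd and CgroupsPerQOS enabled, asking systemd for the top-level QoS slices; the per-pod slices created a little further down follow the same scheme, with the dashes in the pod UID rewritten to underscores. A rough Go sketch of that naming convention, inferred from the slice names visible in this log rather than lifted from kubelet source (the function name is illustrative):

    // podSlice maps a pod's QoS class and UID to the systemd slice name the
    // kubelet requests when it uses the systemd cgroup driver.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qosClass, podUID string) string {
        // Guaranteed pods stay directly under kubepods.slice, which matches
        // the absence of a kubepods-guaranteed slice in this log.
        parent := "kubepods"
        switch qosClass {
        case "Burstable":
            parent = "kubepods-burstable"
        case "BestEffort":
            parent = "kubepods-besteffort"
        }
        // systemd unit names treat "-" as a hierarchy separator, so the
        // dashes in the pod UID become underscores.
        uid := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("%s-pod%s.slice", parent, uid)
    }

    func main() {
        // Both UIDs appear later in this log; the output reproduces the
        // logged slice names exactly.
        fmt.Println(podSlice("Burstable", "43b69f3e913f9e654a7dce8088f93adf"))
        fmt.Println(podSlice("BestEffort", "897c10b3-b21c-4e85-b0c0-1e8e2f8685d9"))
    }
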
Jan 23 01:22:01.827803 kubelet[2343]: I0123 01:22:01.827711 2343 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:22:01.827919 kubelet[2343]: I0123 01:22:01.827902 2343 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:22:01.827969 kubelet[2343]: I0123 01:22:01.827919 2343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:22:01.829135 kubelet[2343]: I0123 01:22:01.828886 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:22:01.830360 kubelet[2343]: E0123 01:22:01.830311 2343 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:22:01.830360 kubelet[2343]: E0123 01:22:01.830352 2343 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-187-240\" not found" Jan 23 01:22:01.888506 systemd[1]: Created slice kubepods-burstable-pod129c8df939ddad477c65313707e4d343.slice - libcontainer container kubepods-burstable-pod129c8df939ddad477c65313707e4d343.slice. Jan 23 01:22:01.906367 kubelet[2343]: E0123 01:22:01.906335 2343 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:01.910248 systemd[1]: Created slice kubepods-burstable-pod43b69f3e913f9e654a7dce8088f93adf.slice - libcontainer container kubepods-burstable-pod43b69f3e913f9e654a7dce8088f93adf.slice. Jan 23 01:22:01.913059 kubelet[2343]: E0123 01:22:01.913043 2343 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:01.914883 systemd[1]: Created slice kubepods-burstable-podfd0862947f6409f29f85f845de82a4f0.slice - libcontainer container kubepods-burstable-podfd0862947f6409f29f85f845de82a4f0.slice. 
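
The eviction manager control loop that has just started enforces the HardEvictionThresholds dumped in the container manager nodeConfig above: memory.available below 100Mi, plus percentage floors of 10%/5% for nodefs space and inodes and 15%/5% for imagefs. A minimal sketch of those checks with the threshold values copied from the logged config; the observed and capacity figures in main are hypothetical, and the two eviction-manager errors above are expected this early, since the node object and image filesystem stats do not exist yet:

    // Hard eviction thresholds as logged in the container manager nodeConfig.
    package main

    import "fmt"

    type threshold struct {
        signal   string
        quantity int64   // absolute bytes (0 when the threshold is percentage-based)
        percent  float64 // fraction of capacity (0 when quantity-based)
    }

    var hard = []threshold{
        {signal: "memory.available", quantity: 100 * 1024 * 1024}, // 100Mi
        {signal: "nodefs.available", percent: 0.10},
        {signal: "nodefs.inodesFree", percent: 0.05},
        {signal: "imagefs.available", percent: 0.15},
        {signal: "imagefs.inodesFree", percent: 0.05},
    }

    // crossed reports whether the observed free amount has fallen below the
    // threshold, i.e. whether the signal would trigger hard eviction.
    func crossed(t threshold, observed, capacity int64) bool {
        if t.quantity > 0 {
            return observed < t.quantity
        }
        return float64(observed) < t.percent*float64(capacity)
    }

    func main() {
        // Hypothetical numbers, only to exercise the check.
        fmt.Println(crossed(hard[0], 80*1024*1024, 4<<30)) // true: 80Mi free < 100Mi
        fmt.Println(crossed(hard[1], 30<<30, 100<<30))     // false: 30% of nodefs still free
    }
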
Jan 23 01:22:01.916280 kubelet[2343]: E0123 01:22:01.916265 2343 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:01.930291 kubelet[2343]: I0123 01:22:01.930074 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-187-240" Jan 23 01:22:01.931601 kubelet[2343]: E0123 01:22:01.931576 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.187.240:6443/api/v1/nodes\": dial tcp 172.238.187.240:6443: connect: connection refused" node="172-238-187-240" Jan 23 01:22:01.954826 kubelet[2343]: I0123 01:22:01.954799 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/129c8df939ddad477c65313707e4d343-ca-certs\") pod \"kube-apiserver-172-238-187-240\" (UID: \"129c8df939ddad477c65313707e4d343\") " pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:01.954826 kubelet[2343]: I0123 01:22:01.954828 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/129c8df939ddad477c65313707e4d343-k8s-certs\") pod \"kube-apiserver-172-238-187-240\" (UID: \"129c8df939ddad477c65313707e4d343\") " pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:01.954919 kubelet[2343]: I0123 01:22:01.954843 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-ca-certs\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:01.954919 kubelet[2343]: I0123 01:22:01.954856 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-k8s-certs\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:01.954919 kubelet[2343]: I0123 01:22:01.954870 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd0862947f6409f29f85f845de82a4f0-kubeconfig\") pod \"kube-scheduler-172-238-187-240\" (UID: \"fd0862947f6409f29f85f845de82a4f0\") " pod="kube-system/kube-scheduler-172-238-187-240" Jan 23 01:22:01.954919 kubelet[2343]: I0123 01:22:01.954884 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/129c8df939ddad477c65313707e4d343-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-187-240\" (UID: \"129c8df939ddad477c65313707e4d343\") " pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:01.954919 kubelet[2343]: I0123 01:22:01.954897 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-flexvolume-dir\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:01.955047 kubelet[2343]: I0123 01:22:01.954909 
2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-kubeconfig\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:01.955047 kubelet[2343]: I0123 01:22:01.954922 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:01.955951 kubelet[2343]: E0123 01:22:01.955928 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.187.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-187-240?timeout=10s\": dial tcp 172.238.187.240:6443: connect: connection refused" interval="400ms" Jan 23 01:22:02.134381 kubelet[2343]: I0123 01:22:02.134351 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-187-240" Jan 23 01:22:02.134678 kubelet[2343]: E0123 01:22:02.134657 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.187.240:6443/api/v1/nodes\": dial tcp 172.238.187.240:6443: connect: connection refused" node="172-238-187-240" Jan 23 01:22:02.207572 kubelet[2343]: E0123 01:22:02.207406 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:02.208517 containerd[1549]: time="2026-01-23T01:22:02.208486512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-187-240,Uid:129c8df939ddad477c65313707e4d343,Namespace:kube-system,Attempt:0,}" Jan 23 01:22:02.213670 kubelet[2343]: E0123 01:22:02.213616 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:02.214137 containerd[1549]: time="2026-01-23T01:22:02.214110497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-187-240,Uid:43b69f3e913f9e654a7dce8088f93adf,Namespace:kube-system,Attempt:0,}" Jan 23 01:22:02.217630 kubelet[2343]: E0123 01:22:02.217588 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:02.236606 containerd[1549]: time="2026-01-23T01:22:02.236580184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-187-240,Uid:fd0862947f6409f29f85f845de82a4f0,Namespace:kube-system,Attempt:0,}" Jan 23 01:22:02.239849 containerd[1549]: time="2026-01-23T01:22:02.239741601Z" level=info msg="connecting to shim 6053a490735c3d07dca3b1aa66d63f1a63405e10b26f7ba359d8c625d3d37226" address="unix:///run/containerd/s/1c40a4bfa674685a809a9f273d9e22a0d90a5467afb07a568387f29ae885d552" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:22:02.240105 containerd[1549]: time="2026-01-23T01:22:02.240086881Z" level=info msg="connecting to shim 75facf5090ec8831e1f32cb2b1a7e94d6d1f3f6ff1977db8357888b1de3f05e2" 
address="unix:///run/containerd/s/c669eb52a239996d9423d494708d3ac5883bd43bd65689896bb3cfdaa35f92eb" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:22:02.273694 containerd[1549]: time="2026-01-23T01:22:02.273630927Z" level=info msg="connecting to shim f2f76a0a9dab85ce1d3038f41297ce1ac8f0a4caf190eadeef2f3305f84201ab" address="unix:///run/containerd/s/59ee67a532e27bbe864d38d35c91ceafb22815201f02bde0ef90b4ee1e85ca70" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:22:02.287932 systemd[1]: Started cri-containerd-6053a490735c3d07dca3b1aa66d63f1a63405e10b26f7ba359d8c625d3d37226.scope - libcontainer container 6053a490735c3d07dca3b1aa66d63f1a63405e10b26f7ba359d8c625d3d37226. Jan 23 01:22:02.293861 systemd[1]: Started cri-containerd-75facf5090ec8831e1f32cb2b1a7e94d6d1f3f6ff1977db8357888b1de3f05e2.scope - libcontainer container 75facf5090ec8831e1f32cb2b1a7e94d6d1f3f6ff1977db8357888b1de3f05e2. Jan 23 01:22:02.320824 systemd[1]: Started cri-containerd-f2f76a0a9dab85ce1d3038f41297ce1ac8f0a4caf190eadeef2f3305f84201ab.scope - libcontainer container f2f76a0a9dab85ce1d3038f41297ce1ac8f0a4caf190eadeef2f3305f84201ab. Jan 23 01:22:02.357823 kubelet[2343]: E0123 01:22:02.357776 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.187.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-187-240?timeout=10s\": dial tcp 172.238.187.240:6443: connect: connection refused" interval="800ms" Jan 23 01:22:02.381705 containerd[1549]: time="2026-01-23T01:22:02.381423079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-187-240,Uid:fd0862947f6409f29f85f845de82a4f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2f76a0a9dab85ce1d3038f41297ce1ac8f0a4caf190eadeef2f3305f84201ab\"" Jan 23 01:22:02.386954 kubelet[2343]: E0123 01:22:02.386722 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:02.388653 containerd[1549]: time="2026-01-23T01:22:02.388596112Z" level=info msg="CreateContainer within sandbox \"f2f76a0a9dab85ce1d3038f41297ce1ac8f0a4caf190eadeef2f3305f84201ab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:22:02.390805 containerd[1549]: time="2026-01-23T01:22:02.390777170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-187-240,Uid:129c8df939ddad477c65313707e4d343,Namespace:kube-system,Attempt:0,} returns sandbox id \"75facf5090ec8831e1f32cb2b1a7e94d6d1f3f6ff1977db8357888b1de3f05e2\"" Jan 23 01:22:02.398662 kubelet[2343]: E0123 01:22:02.397838 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:02.404186 containerd[1549]: time="2026-01-23T01:22:02.404048637Z" level=info msg="Container 4b53e7ccb49a97fdf8fd1b5bee65bb37a5657993a798f05bc9cebcf5ab11dc8e: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:02.408818 containerd[1549]: time="2026-01-23T01:22:02.408794062Z" level=info msg="CreateContainer within sandbox \"75facf5090ec8831e1f32cb2b1a7e94d6d1f3f6ff1977db8357888b1de3f05e2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:22:02.414388 containerd[1549]: time="2026-01-23T01:22:02.414358576Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-238-187-240,Uid:43b69f3e913f9e654a7dce8088f93adf,Namespace:kube-system,Attempt:0,} returns sandbox id \"6053a490735c3d07dca3b1aa66d63f1a63405e10b26f7ba359d8c625d3d37226\"" Jan 23 01:22:02.415517 kubelet[2343]: E0123 01:22:02.415499 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:02.416434 containerd[1549]: time="2026-01-23T01:22:02.416403714Z" level=info msg="CreateContainer within sandbox \"f2f76a0a9dab85ce1d3038f41297ce1ac8f0a4caf190eadeef2f3305f84201ab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4b53e7ccb49a97fdf8fd1b5bee65bb37a5657993a798f05bc9cebcf5ab11dc8e\"" Jan 23 01:22:02.417463 containerd[1549]: time="2026-01-23T01:22:02.417417833Z" level=info msg="StartContainer for \"4b53e7ccb49a97fdf8fd1b5bee65bb37a5657993a798f05bc9cebcf5ab11dc8e\"" Jan 23 01:22:02.418459 containerd[1549]: time="2026-01-23T01:22:02.418438532Z" level=info msg="connecting to shim 4b53e7ccb49a97fdf8fd1b5bee65bb37a5657993a798f05bc9cebcf5ab11dc8e" address="unix:///run/containerd/s/59ee67a532e27bbe864d38d35c91ceafb22815201f02bde0ef90b4ee1e85ca70" protocol=ttrpc version=3 Jan 23 01:22:02.418833 containerd[1549]: time="2026-01-23T01:22:02.418799642Z" level=info msg="CreateContainer within sandbox \"6053a490735c3d07dca3b1aa66d63f1a63405e10b26f7ba359d8c625d3d37226\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:22:02.422052 containerd[1549]: time="2026-01-23T01:22:02.422026439Z" level=info msg="Container 7a182fb8467766d319018a6bdb1483194ed4ed50f1f26dbc041375b640e27896: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:02.428294 containerd[1549]: time="2026-01-23T01:22:02.427749263Z" level=info msg="CreateContainer within sandbox \"75facf5090ec8831e1f32cb2b1a7e94d6d1f3f6ff1977db8357888b1de3f05e2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a182fb8467766d319018a6bdb1483194ed4ed50f1f26dbc041375b640e27896\"" Jan 23 01:22:02.428294 containerd[1549]: time="2026-01-23T01:22:02.427835923Z" level=info msg="Container 66426b2af5e65c6e32663c02730b172870e6348bb4ef662d6b456e32c9b41106: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:02.428294 containerd[1549]: time="2026-01-23T01:22:02.428214742Z" level=info msg="StartContainer for \"7a182fb8467766d319018a6bdb1483194ed4ed50f1f26dbc041375b640e27896\"" Jan 23 01:22:02.429744 containerd[1549]: time="2026-01-23T01:22:02.429716471Z" level=info msg="connecting to shim 7a182fb8467766d319018a6bdb1483194ed4ed50f1f26dbc041375b640e27896" address="unix:///run/containerd/s/c669eb52a239996d9423d494708d3ac5883bd43bd65689896bb3cfdaa35f92eb" protocol=ttrpc version=3 Jan 23 01:22:02.431945 containerd[1549]: time="2026-01-23T01:22:02.431916909Z" level=info msg="CreateContainer within sandbox \"6053a490735c3d07dca3b1aa66d63f1a63405e10b26f7ba359d8c625d3d37226\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"66426b2af5e65c6e32663c02730b172870e6348bb4ef662d6b456e32c9b41106\"" Jan 23 01:22:02.433268 containerd[1549]: time="2026-01-23T01:22:02.433240027Z" level=info msg="StartContainer for \"66426b2af5e65c6e32663c02730b172870e6348bb4ef662d6b456e32c9b41106\"" Jan 23 01:22:02.434253 containerd[1549]: time="2026-01-23T01:22:02.434223356Z" level=info msg="connecting to shim 66426b2af5e65c6e32663c02730b172870e6348bb4ef662d6b456e32c9b41106" 
address="unix:///run/containerd/s/1c40a4bfa674685a809a9f273d9e22a0d90a5467afb07a568387f29ae885d552" protocol=ttrpc version=3 Jan 23 01:22:02.449246 systemd[1]: Started cri-containerd-4b53e7ccb49a97fdf8fd1b5bee65bb37a5657993a798f05bc9cebcf5ab11dc8e.scope - libcontainer container 4b53e7ccb49a97fdf8fd1b5bee65bb37a5657993a798f05bc9cebcf5ab11dc8e. Jan 23 01:22:02.463387 systemd[1]: Started cri-containerd-7a182fb8467766d319018a6bdb1483194ed4ed50f1f26dbc041375b640e27896.scope - libcontainer container 7a182fb8467766d319018a6bdb1483194ed4ed50f1f26dbc041375b640e27896. Jan 23 01:22:02.485781 systemd[1]: Started cri-containerd-66426b2af5e65c6e32663c02730b172870e6348bb4ef662d6b456e32c9b41106.scope - libcontainer container 66426b2af5e65c6e32663c02730b172870e6348bb4ef662d6b456e32c9b41106. Jan 23 01:22:02.538285 kubelet[2343]: I0123 01:22:02.538255 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-187-240" Jan 23 01:22:02.539082 kubelet[2343]: E0123 01:22:02.538691 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.187.240:6443/api/v1/nodes\": dial tcp 172.238.187.240:6443: connect: connection refused" node="172-238-187-240" Jan 23 01:22:02.541753 containerd[1549]: time="2026-01-23T01:22:02.541725619Z" level=info msg="StartContainer for \"7a182fb8467766d319018a6bdb1483194ed4ed50f1f26dbc041375b640e27896\" returns successfully" Jan 23 01:22:02.568801 containerd[1549]: time="2026-01-23T01:22:02.568763902Z" level=info msg="StartContainer for \"4b53e7ccb49a97fdf8fd1b5bee65bb37a5657993a798f05bc9cebcf5ab11dc8e\" returns successfully" Jan 23 01:22:02.590671 containerd[1549]: time="2026-01-23T01:22:02.590616420Z" level=info msg="StartContainer for \"66426b2af5e65c6e32663c02730b172870e6348bb4ef662d6b456e32c9b41106\" returns successfully" Jan 23 01:22:02.796246 kubelet[2343]: E0123 01:22:02.796151 2343 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:02.796330 kubelet[2343]: E0123 01:22:02.796260 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:02.800688 kubelet[2343]: E0123 01:22:02.800627 2343 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:02.801194 kubelet[2343]: E0123 01:22:02.801173 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:02.804204 kubelet[2343]: E0123 01:22:02.804184 2343 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:02.804298 kubelet[2343]: E0123 01:22:02.804281 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:03.343228 kubelet[2343]: I0123 01:22:03.343193 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-187-240" Jan 23 01:22:03.809202 kubelet[2343]: E0123 01:22:03.809103 2343 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:03.809671 kubelet[2343]: E0123 01:22:03.809236 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:03.811145 kubelet[2343]: E0123 01:22:03.809947 2343 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:03.811145 kubelet[2343]: E0123 01:22:03.810031 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:03.811221 kubelet[2343]: E0123 01:22:03.811189 2343 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:03.811299 kubelet[2343]: E0123 01:22:03.811281 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:04.018185 kubelet[2343]: E0123 01:22:04.018122 2343 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-238-187-240\" not found" node="172-238-187-240" Jan 23 01:22:04.088465 kubelet[2343]: I0123 01:22:04.088111 2343 kubelet_node_status.go:78] "Successfully registered node" node="172-238-187-240" Jan 23 01:22:04.155169 kubelet[2343]: I0123 01:22:04.155142 2343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:04.182797 kubelet[2343]: E0123 01:22:04.182762 2343 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-187-240\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:04.182797 kubelet[2343]: I0123 01:22:04.182792 2343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-187-240" Jan 23 01:22:04.185307 kubelet[2343]: E0123 01:22:04.185270 2343 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-187-240\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-187-240" Jan 23 01:22:04.185349 kubelet[2343]: I0123 01:22:04.185308 2343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:04.190244 kubelet[2343]: E0123 01:22:04.190205 2343 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-187-240\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:04.733468 kubelet[2343]: I0123 01:22:04.733430 2343 apiserver.go:52] "Watching apiserver" Jan 23 01:22:04.755089 kubelet[2343]: I0123 01:22:04.755064 2343 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:22:05.480535 kubelet[2343]: I0123 01:22:05.480489 2343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:05.485217 kubelet[2343]: E0123 01:22:05.485165 2343 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:05.810785 kubelet[2343]: E0123 01:22:05.810694 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:05.877824 systemd[1]: Reload requested from client PID 2609 ('systemctl') (unit session-7.scope)... Jan 23 01:22:05.877844 systemd[1]: Reloading... Jan 23 01:22:06.003678 zram_generator::config[2653]: No configuration found. Jan 23 01:22:06.163483 kubelet[2343]: I0123 01:22:06.162774 2343 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:06.172298 kubelet[2343]: E0123 01:22:06.172258 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:06.242791 systemd[1]: Reloading finished in 364 ms. Jan 23 01:22:06.276027 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:22:06.283253 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:22:06.283966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:22:06.284019 systemd[1]: kubelet.service: Consumed 695ms CPU time, 131.7M memory peak. Jan 23 01:22:06.287227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:22:06.484961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:22:06.489913 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:22:06.543211 kubelet[2704]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:22:06.543211 kubelet[2704]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:22:06.543211 kubelet[2704]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:22:06.543583 kubelet[2704]: I0123 01:22:06.543253 2704 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:22:06.549064 kubelet[2704]: I0123 01:22:06.549034 2704 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:22:06.549064 kubelet[2704]: I0123 01:22:06.549054 2704 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:22:06.549255 kubelet[2704]: I0123 01:22:06.549212 2704 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:22:06.550224 kubelet[2704]: I0123 01:22:06.550201 2704 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 23 01:22:06.552622 kubelet[2704]: I0123 01:22:06.552037 2704 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:22:06.556177 kubelet[2704]: I0123 01:22:06.556161 2704 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:22:06.561714 kubelet[2704]: I0123 01:22:06.561677 2704 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:22:06.561949 kubelet[2704]: I0123 01:22:06.561913 2704 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:22:06.562287 kubelet[2704]: I0123 01:22:06.561942 2704 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-187-240","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:22:06.562287 kubelet[2704]: I0123 01:22:06.562275 2704 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:22:06.562287 kubelet[2704]: I0123 01:22:06.562286 2704 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:22:06.562455 kubelet[2704]: I0123 01:22:06.562337 2704 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:22:06.562536 kubelet[2704]: I0123 01:22:06.562515 2704 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:22:06.562536 kubelet[2704]: I0123 01:22:06.562539 2704 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:22:06.563028 kubelet[2704]: I0123 01:22:06.562911 2704 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:22:06.563028 kubelet[2704]: I0123 01:22:06.562931 2704 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:22:06.564728 kubelet[2704]: I0123 01:22:06.564714 2704 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:22:06.565212 kubelet[2704]: I0123 01:22:06.565186 2704 kubelet.go:890] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode" Jan 23 01:22:06.565595 kubelet[2704]: I0123 01:22:06.565549 2704 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:22:06.565595 kubelet[2704]: I0123 01:22:06.565583 2704 server.go:1287] "Started kubelet" Jan 23 01:22:06.570118 kubelet[2704]: I0123 01:22:06.569696 2704 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:22:06.577679 kubelet[2704]: I0123 01:22:06.577278 2704 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:22:06.578313 kubelet[2704]: I0123 01:22:06.578125 2704 server.go:479] "Adding debug handlers to kubelet server" Jan 23 01:22:06.579630 kubelet[2704]: I0123 01:22:06.578944 2704 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:22:06.579630 kubelet[2704]: I0123 01:22:06.579133 2704 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:22:06.579630 kubelet[2704]: I0123 01:22:06.579273 2704 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:22:06.581310 kubelet[2704]: I0123 01:22:06.580786 2704 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:22:06.581310 kubelet[2704]: E0123 01:22:06.580956 2704 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-187-240\" not found" Jan 23 01:22:06.583280 kubelet[2704]: I0123 01:22:06.583180 2704 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:22:06.583329 kubelet[2704]: I0123 01:22:06.583285 2704 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:22:06.585508 kubelet[2704]: I0123 01:22:06.585437 2704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:22:06.587986 kubelet[2704]: I0123 01:22:06.586625 2704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:22:06.587986 kubelet[2704]: I0123 01:22:06.586682 2704 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:22:06.587986 kubelet[2704]: I0123 01:22:06.586696 2704 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:22:06.587986 kubelet[2704]: I0123 01:22:06.586702 2704 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:22:06.587986 kubelet[2704]: E0123 01:22:06.586745 2704 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:22:06.598309 kubelet[2704]: I0123 01:22:06.598286 2704 factory.go:221] Registration of the containerd container factory successfully Jan 23 01:22:06.598309 kubelet[2704]: I0123 01:22:06.598305 2704 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:22:06.598433 kubelet[2704]: I0123 01:22:06.598373 2704 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:22:06.600503 kubelet[2704]: E0123 01:22:06.599713 2704 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:22:06.653624 kubelet[2704]: I0123 01:22:06.653594 2704 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:22:06.653624 kubelet[2704]: I0123 01:22:06.653610 2704 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:22:06.653624 kubelet[2704]: I0123 01:22:06.653626 2704 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:22:06.654222 kubelet[2704]: I0123 01:22:06.654069 2704 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:22:06.654222 kubelet[2704]: I0123 01:22:06.654084 2704 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:22:06.654222 kubelet[2704]: I0123 01:22:06.654101 2704 policy_none.go:49] "None policy: Start" Jan 23 01:22:06.654222 kubelet[2704]: I0123 01:22:06.654142 2704 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:22:06.654222 kubelet[2704]: I0123 01:22:06.654159 2704 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:22:06.655144 kubelet[2704]: I0123 01:22:06.655100 2704 state_mem.go:75] "Updated machine memory state" Jan 23 01:22:06.661656 kubelet[2704]: I0123 01:22:06.661563 2704 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:22:06.663887 kubelet[2704]: I0123 01:22:06.663754 2704 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:22:06.664832 kubelet[2704]: I0123 01:22:06.664702 2704 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:22:06.664976 kubelet[2704]: I0123 01:22:06.664956 2704 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:22:06.666886 kubelet[2704]: E0123 01:22:06.666837 2704 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 01:22:06.688242 kubelet[2704]: I0123 01:22:06.688213 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-187-240" Jan 23 01:22:06.688554 kubelet[2704]: I0123 01:22:06.688505 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:06.689752 kubelet[2704]: I0123 01:22:06.689702 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:06.697470 kubelet[2704]: E0123 01:22:06.697450 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-187-240\" already exists" pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:06.700166 kubelet[2704]: E0123 01:22:06.700123 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-187-240\" already exists" pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:06.769188 kubelet[2704]: I0123 01:22:06.768292 2704 kubelet_node_status.go:75] "Attempting to register node" node="172-238-187-240" Jan 23 01:22:06.780665 kubelet[2704]: I0123 01:22:06.780398 2704 kubelet_node_status.go:124] "Node was previously registered" node="172-238-187-240" Jan 23 01:22:06.780665 kubelet[2704]: I0123 01:22:06.780453 2704 kubelet_node_status.go:78] "Successfully registered node" node="172-238-187-240" Jan 23 01:22:06.785036 kubelet[2704]: I0123 01:22:06.785019 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/129c8df939ddad477c65313707e4d343-k8s-certs\") pod \"kube-apiserver-172-238-187-240\" (UID: \"129c8df939ddad477c65313707e4d343\") " pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:06.785205 kubelet[2704]: I0123 01:22:06.785158 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/129c8df939ddad477c65313707e4d343-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-187-240\" (UID: \"129c8df939ddad477c65313707e4d343\") " pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:06.785272 kubelet[2704]: I0123 01:22:06.785259 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-ca-certs\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:06.785389 kubelet[2704]: I0123 01:22:06.785376 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-k8s-certs\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:06.785527 kubelet[2704]: I0123 01:22:06.785473 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 
23 01:22:06.785527 kubelet[2704]: I0123 01:22:06.785492 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/129c8df939ddad477c65313707e4d343-ca-certs\") pod \"kube-apiserver-172-238-187-240\" (UID: \"129c8df939ddad477c65313707e4d343\") " pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:06.785623 kubelet[2704]: I0123 01:22:06.785610 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-flexvolume-dir\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:06.785762 kubelet[2704]: I0123 01:22:06.785744 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43b69f3e913f9e654a7dce8088f93adf-kubeconfig\") pod \"kube-controller-manager-172-238-187-240\" (UID: \"43b69f3e913f9e654a7dce8088f93adf\") " pod="kube-system/kube-controller-manager-172-238-187-240" Jan 23 01:22:06.785762 kubelet[2704]: I0123 01:22:06.785764 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd0862947f6409f29f85f845de82a4f0-kubeconfig\") pod \"kube-scheduler-172-238-187-240\" (UID: \"fd0862947f6409f29f85f845de82a4f0\") " pod="kube-system/kube-scheduler-172-238-187-240" Jan 23 01:22:06.881898 sudo[2736]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 01:22:06.882254 sudo[2736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 01:22:06.996195 kubelet[2704]: E0123 01:22:06.994598 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:06.998188 kubelet[2704]: E0123 01:22:06.998166 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:07.001361 kubelet[2704]: E0123 01:22:07.001327 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:07.208103 sudo[2736]: pam_unix(sudo:session): session closed for user root Jan 23 01:22:07.564676 kubelet[2704]: I0123 01:22:07.564413 2704 apiserver.go:52] "Watching apiserver" Jan 23 01:22:07.584318 kubelet[2704]: I0123 01:22:07.584274 2704 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:22:07.634817 kubelet[2704]: I0123 01:22:07.634391 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:07.637566 kubelet[2704]: E0123 01:22:07.636048 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:07.638269 kubelet[2704]: E0123 01:22:07.638209 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:07.648823 kubelet[2704]: E0123 01:22:07.648145 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-187-240\" already exists" pod="kube-system/kube-apiserver-172-238-187-240" Jan 23 01:22:07.648823 kubelet[2704]: E0123 01:22:07.648257 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:07.680272 kubelet[2704]: I0123 01:22:07.680174 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-238-187-240" podStartSLOduration=1.680123161 podStartE2EDuration="1.680123161s" podCreationTimestamp="2026-01-23 01:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:22:07.668903212 +0000 UTC m=+1.174427677" watchObservedRunningTime="2026-01-23 01:22:07.680123161 +0000 UTC m=+1.185647626" Jan 23 01:22:07.689430 kubelet[2704]: I0123 01:22:07.689387 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-187-240" podStartSLOduration=2.689369931 podStartE2EDuration="2.689369931s" podCreationTimestamp="2026-01-23 01:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:22:07.688731582 +0000 UTC m=+1.194256057" watchObservedRunningTime="2026-01-23 01:22:07.689369931 +0000 UTC m=+1.194894406" Jan 23 01:22:07.689676 kubelet[2704]: I0123 01:22:07.689467 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-187-240" podStartSLOduration=1.6894629110000001 podStartE2EDuration="1.689462911s" podCreationTimestamp="2026-01-23 01:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:22:07.68055915 +0000 UTC m=+1.186083615" watchObservedRunningTime="2026-01-23 01:22:07.689462911 +0000 UTC m=+1.194987386" Jan 23 01:22:08.522574 sudo[1792]: pam_unix(sudo:session): session closed for user root Jan 23 01:22:08.543801 sshd[1791]: Connection closed by 68.220.241.50 port 42958 Jan 23 01:22:08.544444 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jan 23 01:22:08.549828 systemd[1]: sshd@6-172.238.187.240:22-68.220.241.50:42958.service: Deactivated successfully. Jan 23 01:22:08.552548 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:22:08.552794 systemd[1]: session-7.scope: Consumed 4.073s CPU time, 269.3M memory peak. Jan 23 01:22:08.554708 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:22:08.557125 systemd-logind[1524]: Removed session 7. 
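
The pod_startup_latency_tracker entries a few lines up report podStartSLOduration values such as 1.680123161s for kube-scheduler; that figure is simply watchObservedRunningTime minus podCreationTimestamp (both pull timestamps are the zero time here, so image pulling contributes nothing). Redoing the subtraction with the timestamps copied from the log:

    // Reproduces podStartSLOduration for the kube-scheduler static pod from
    // the two timestamps in the tracker line above.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-23 01:22:06 +0000 UTC")
        observed := mustParse("2026-01-23 01:22:07.680123161 +0000 UTC")
        fmt.Printf("%.9fs\n", observed.Sub(created).Seconds()) // 1.680123161s
    }

The same arithmetic yields the 2.689369931s reported for kube-apiserver, whose static pod has a creation timestamp one second earlier (01:22:05).
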
Jan 23 01:22:08.636206 kubelet[2704]: E0123 01:22:08.636168 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:08.637129 kubelet[2704]: E0123 01:22:08.636606 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:12.379423 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 01:22:12.450406 kubelet[2704]: E0123 01:22:12.450366 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:12.642715 kubelet[2704]: E0123 01:22:12.642080 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:12.995205 kubelet[2704]: I0123 01:22:12.995080 2704 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:22:12.995473 containerd[1549]: time="2026-01-23T01:22:12.995430345Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:22:12.995905 kubelet[2704]: I0123 01:22:12.995572 2704 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:22:13.645921 kubelet[2704]: E0123 01:22:13.645872 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:13.647651 systemd[1]: Created slice kubepods-besteffort-pod897c10b3_b21c_4e85_b0c0_1e8e2f8685d9.slice - libcontainer container kubepods-besteffort-pod897c10b3_b21c_4e85_b0c0_1e8e2f8685d9.slice. Jan 23 01:22:13.677215 systemd[1]: Created slice kubepods-burstable-pod960e8f72_2a96_4356_96a4_71f44baf117f.slice - libcontainer container kubepods-burstable-pod960e8f72_2a96_4356_96a4_71f44baf117f.slice. 
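
The "Nameserver limits exceeded" warnings that recur throughout this log mean the node's resolv.conf lists more nameservers than the kubelet will pass through; it applies only the first few (three here: 172.232.0.17, 172.232.0.16 and 172.232.0.21) and omits the rest. A toy sketch of that truncation; the limit of three matches the applied line in the log and the kubelet's usual cap, and the fourth address below is made up for illustration:

    // Truncates a resolv.conf nameserver list the way the kubelet's applied
    // line suggests.
    package main

    import "fmt"

    const maxNameservers = 3

    func applyLimit(servers []string) (applied []string, omitted bool) {
        if len(servers) <= maxNameservers {
            return servers, false
        }
        return servers[:maxNameservers], true
    }

    func main() {
        resolvers := []string{"172.232.0.17", "172.232.0.16", "172.232.0.21", "10.0.0.53"}
        applied, omitted := applyLimit(resolvers)
        fmt.Println("applied:", applied, "omitted:", omitted)
    }
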
Jan 23 01:22:13.731273 kubelet[2704]: I0123 01:22:13.731231 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-config-path\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731273 kubelet[2704]: I0123 01:22:13.731271 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/897c10b3-b21c-4e85-b0c0-1e8e2f8685d9-lib-modules\") pod \"kube-proxy-jt9xc\" (UID: \"897c10b3-b21c-4e85-b0c0-1e8e2f8685d9\") " pod="kube-system/kube-proxy-jt9xc" Jan 23 01:22:13.731445 kubelet[2704]: I0123 01:22:13.731292 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkzz\" (UniqueName: \"kubernetes.io/projected/897c10b3-b21c-4e85-b0c0-1e8e2f8685d9-kube-api-access-khkzz\") pod \"kube-proxy-jt9xc\" (UID: \"897c10b3-b21c-4e85-b0c0-1e8e2f8685d9\") " pod="kube-system/kube-proxy-jt9xc" Jan 23 01:22:13.731445 kubelet[2704]: I0123 01:22:13.731310 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-hostproc\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731445 kubelet[2704]: I0123 01:22:13.731322 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-lib-modules\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731445 kubelet[2704]: I0123 01:22:13.731335 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-xtables-lock\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731445 kubelet[2704]: I0123 01:22:13.731348 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/960e8f72-2a96-4356-96a4-71f44baf117f-hubble-tls\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731445 kubelet[2704]: I0123 01:22:13.731360 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72mwb\" (UniqueName: \"kubernetes.io/projected/960e8f72-2a96-4356-96a4-71f44baf117f-kube-api-access-72mwb\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731713 kubelet[2704]: I0123 01:22:13.731374 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/960e8f72-2a96-4356-96a4-71f44baf117f-clustermesh-secrets\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731713 kubelet[2704]: I0123 01:22:13.731388 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-host-proc-sys-net\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731713 kubelet[2704]: I0123 01:22:13.731400 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cni-path\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731713 kubelet[2704]: I0123 01:22:13.731414 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-host-proc-sys-kernel\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731713 kubelet[2704]: I0123 01:22:13.731428 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/897c10b3-b21c-4e85-b0c0-1e8e2f8685d9-kube-proxy\") pod \"kube-proxy-jt9xc\" (UID: \"897c10b3-b21c-4e85-b0c0-1e8e2f8685d9\") " pod="kube-system/kube-proxy-jt9xc" Jan 23 01:22:13.731838 kubelet[2704]: I0123 01:22:13.731441 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/897c10b3-b21c-4e85-b0c0-1e8e2f8685d9-xtables-lock\") pod \"kube-proxy-jt9xc\" (UID: \"897c10b3-b21c-4e85-b0c0-1e8e2f8685d9\") " pod="kube-system/kube-proxy-jt9xc" Jan 23 01:22:13.731838 kubelet[2704]: I0123 01:22:13.731455 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-run\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731838 kubelet[2704]: I0123 01:22:13.731468 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-bpf-maps\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731838 kubelet[2704]: I0123 01:22:13.731482 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-cgroup\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.731838 kubelet[2704]: I0123 01:22:13.731497 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-etc-cni-netd\") pod \"cilium-rv85q\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " pod="kube-system/cilium-rv85q" Jan 23 01:22:13.958096 kubelet[2704]: E0123 01:22:13.957967 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:13.959258 containerd[1549]: time="2026-01-23T01:22:13.958758042Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jt9xc,Uid:897c10b3-b21c-4e85-b0c0-1e8e2f8685d9,Namespace:kube-system,Attempt:0,}" Jan 23 01:22:13.981501 containerd[1549]: time="2026-01-23T01:22:13.981451189Z" level=info msg="connecting to shim 18f1be46d497a158d132273ca6cd053d0360d1b682eb932b2109ef5e5d2d6c0e" address="unix:///run/containerd/s/170ed60192efb26f7aca33aa71be01ecd4809cc41b22bc7c7d2d4510327220db" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:22:13.982672 kubelet[2704]: E0123 01:22:13.982459 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:13.985001 containerd[1549]: time="2026-01-23T01:22:13.984954296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv85q,Uid:960e8f72-2a96-4356-96a4-71f44baf117f,Namespace:kube-system,Attempt:0,}" Jan 23 01:22:14.036151 systemd[1]: Started cri-containerd-18f1be46d497a158d132273ca6cd053d0360d1b682eb932b2109ef5e5d2d6c0e.scope - libcontainer container 18f1be46d497a158d132273ca6cd053d0360d1b682eb932b2109ef5e5d2d6c0e. Jan 23 01:22:14.050231 containerd[1549]: time="2026-01-23T01:22:14.050171240Z" level=info msg="connecting to shim db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0" address="unix:///run/containerd/s/f59c7ae0061b607fef3c0ab7d5ee94707563644c99b3ef65f8c23e94cdfea788" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:22:14.058293 systemd[1]: Created slice kubepods-besteffort-pod226cfc49_acf8_4abc_bf26_69ad52ccf7e9.slice - libcontainer container kubepods-besteffort-pod226cfc49_acf8_4abc_bf26_69ad52ccf7e9.slice. Jan 23 01:22:14.093769 systemd[1]: Started cri-containerd-db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0.scope - libcontainer container db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0. 
Jan 23 01:22:14.106447 containerd[1549]: time="2026-01-23T01:22:14.106328634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jt9xc,Uid:897c10b3-b21c-4e85-b0c0-1e8e2f8685d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"18f1be46d497a158d132273ca6cd053d0360d1b682eb932b2109ef5e5d2d6c0e\"" Jan 23 01:22:14.108728 kubelet[2704]: E0123 01:22:14.108697 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:14.114368 containerd[1549]: time="2026-01-23T01:22:14.114287506Z" level=info msg="CreateContainer within sandbox \"18f1be46d497a158d132273ca6cd053d0360d1b682eb932b2109ef5e5d2d6c0e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:22:14.129779 containerd[1549]: time="2026-01-23T01:22:14.129753221Z" level=info msg="Container 6dbc9179cf7ed3e19e647176057db245984d55784fcdc5f6958cd3b39dffb7c1: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:14.136025 containerd[1549]: time="2026-01-23T01:22:14.135106785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rv85q,Uid:960e8f72-2a96-4356-96a4-71f44baf117f,Namespace:kube-system,Attempt:0,} returns sandbox id \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\"" Jan 23 01:22:14.136080 kubelet[2704]: I0123 01:22:14.135454 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/226cfc49-acf8-4abc-bf26-69ad52ccf7e9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-z6v5h\" (UID: \"226cfc49-acf8-4abc-bf26-69ad52ccf7e9\") " pod="kube-system/cilium-operator-6c4d7847fc-z6v5h" Jan 23 01:22:14.136080 kubelet[2704]: I0123 01:22:14.135481 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkwdt\" (UniqueName: \"kubernetes.io/projected/226cfc49-acf8-4abc-bf26-69ad52ccf7e9-kube-api-access-kkwdt\") pod \"cilium-operator-6c4d7847fc-z6v5h\" (UID: \"226cfc49-acf8-4abc-bf26-69ad52ccf7e9\") " pod="kube-system/cilium-operator-6c4d7847fc-z6v5h" Jan 23 01:22:14.136364 kubelet[2704]: E0123 01:22:14.136313 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:14.136549 containerd[1549]: time="2026-01-23T01:22:14.136499784Z" level=info msg="CreateContainer within sandbox \"18f1be46d497a158d132273ca6cd053d0360d1b682eb932b2109ef5e5d2d6c0e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6dbc9179cf7ed3e19e647176057db245984d55784fcdc5f6958cd3b39dffb7c1\"" Jan 23 01:22:14.137757 containerd[1549]: time="2026-01-23T01:22:14.137738473Z" level=info msg="StartContainer for \"6dbc9179cf7ed3e19e647176057db245984d55784fcdc5f6958cd3b39dffb7c1\"" Jan 23 01:22:14.139267 containerd[1549]: time="2026-01-23T01:22:14.139248081Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 01:22:14.140127 containerd[1549]: time="2026-01-23T01:22:14.140105120Z" level=info msg="connecting to shim 6dbc9179cf7ed3e19e647176057db245984d55784fcdc5f6958cd3b39dffb7c1" address="unix:///run/containerd/s/170ed60192efb26f7aca33aa71be01ecd4809cc41b22bc7c7d2d4510327220db" protocol=ttrpc version=3 Jan 23 01:22:14.162776 systemd[1]: Started 
cri-containerd-6dbc9179cf7ed3e19e647176057db245984d55784fcdc5f6958cd3b39dffb7c1.scope - libcontainer container 6dbc9179cf7ed3e19e647176057db245984d55784fcdc5f6958cd3b39dffb7c1. Jan 23 01:22:14.275060 containerd[1549]: time="2026-01-23T01:22:14.274828096Z" level=info msg="StartContainer for \"6dbc9179cf7ed3e19e647176057db245984d55784fcdc5f6958cd3b39dffb7c1\" returns successfully" Jan 23 01:22:14.361690 kubelet[2704]: E0123 01:22:14.361632 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:14.363013 containerd[1549]: time="2026-01-23T01:22:14.362881338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-z6v5h,Uid:226cfc49-acf8-4abc-bf26-69ad52ccf7e9,Namespace:kube-system,Attempt:0,}" Jan 23 01:22:14.386229 containerd[1549]: time="2026-01-23T01:22:14.386121484Z" level=info msg="connecting to shim b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c" address="unix:///run/containerd/s/b42a23b2cae96732e75b6271e028813c180cbb96b10435626b2a23f6291df1fd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:22:14.420986 systemd[1]: Started cri-containerd-b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c.scope - libcontainer container b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c. Jan 23 01:22:14.497518 containerd[1549]: time="2026-01-23T01:22:14.497466163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-z6v5h,Uid:226cfc49-acf8-4abc-bf26-69ad52ccf7e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\"" Jan 23 01:22:14.498430 kubelet[2704]: E0123 01:22:14.498374 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:14.653428 kubelet[2704]: E0123 01:22:14.653390 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:15.625597 kubelet[2704]: E0123 01:22:15.625551 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:15.652328 kubelet[2704]: I0123 01:22:15.652019 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jt9xc" podStartSLOduration=2.652001769 podStartE2EDuration="2.652001769s" podCreationTimestamp="2026-01-23 01:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:22:14.667997073 +0000 UTC m=+8.173521538" watchObservedRunningTime="2026-01-23 01:22:15.652001769 +0000 UTC m=+9.157526234" Jan 23 01:22:15.655764 kubelet[2704]: E0123 01:22:15.655731 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:16.049240 kubelet[2704]: E0123 01:22:16.049101 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 
172.232.0.21" Jan 23 01:22:16.656696 kubelet[2704]: E0123 01:22:16.656665 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:25.948535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722706192.mount: Deactivated successfully. Jan 23 01:22:27.305923 update_engine[1525]: I20260123 01:22:27.305878 1525 update_attempter.cc:509] Updating boot flags... Jan 23 01:22:27.952823 containerd[1549]: time="2026-01-23T01:22:27.952759660Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:22:27.953667 containerd[1549]: time="2026-01-23T01:22:27.953544508Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 01:22:27.954567 containerd[1549]: time="2026-01-23T01:22:27.954520340Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:22:27.956752 containerd[1549]: time="2026-01-23T01:22:27.956696836Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.817378753s" Jan 23 01:22:27.956752 containerd[1549]: time="2026-01-23T01:22:27.956734083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 01:22:27.959334 containerd[1549]: time="2026-01-23T01:22:27.959285200Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:22:27.959878 containerd[1549]: time="2026-01-23T01:22:27.959462133Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 01:22:27.968991 containerd[1549]: time="2026-01-23T01:22:27.968969309Z" level=info msg="Container 95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:27.975193 containerd[1549]: time="2026-01-23T01:22:27.975159045Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\"" Jan 23 01:22:27.976122 containerd[1549]: time="2026-01-23T01:22:27.976103902Z" level=info msg="StartContainer for \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\"" Jan 23 01:22:27.977206 containerd[1549]: time="2026-01-23T01:22:27.977186564Z" level=info msg="connecting to shim 95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9" 
address="unix:///run/containerd/s/f59c7ae0061b607fef3c0ab7d5ee94707563644c99b3ef65f8c23e94cdfea788" protocol=ttrpc version=3 Jan 23 01:22:28.005799 systemd[1]: Started cri-containerd-95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9.scope - libcontainer container 95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9. Jan 23 01:22:28.042088 containerd[1549]: time="2026-01-23T01:22:28.042035533Z" level=info msg="StartContainer for \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\" returns successfully" Jan 23 01:22:28.060350 systemd[1]: cri-containerd-95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9.scope: Deactivated successfully. Jan 23 01:22:28.065064 containerd[1549]: time="2026-01-23T01:22:28.065018457Z" level=info msg="received container exit event container_id:\"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\" id:\"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\" pid:3148 exited_at:{seconds:1769131348 nanos:63387452}" Jan 23 01:22:28.098628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9-rootfs.mount: Deactivated successfully. Jan 23 01:22:28.678498 kubelet[2704]: E0123 01:22:28.678459 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:28.683601 containerd[1549]: time="2026-01-23T01:22:28.683561052Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:22:28.692312 containerd[1549]: time="2026-01-23T01:22:28.692247863Z" level=info msg="Container 12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:28.698678 containerd[1549]: time="2026-01-23T01:22:28.698632561Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\"" Jan 23 01:22:28.699743 containerd[1549]: time="2026-01-23T01:22:28.699705219Z" level=info msg="StartContainer for \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\"" Jan 23 01:22:28.701145 containerd[1549]: time="2026-01-23T01:22:28.701119256Z" level=info msg="connecting to shim 12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d" address="unix:///run/containerd/s/f59c7ae0061b607fef3c0ab7d5ee94707563644c99b3ef65f8c23e94cdfea788" protocol=ttrpc version=3 Jan 23 01:22:28.725220 systemd[1]: Started cri-containerd-12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d.scope - libcontainer container 12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d. Jan 23 01:22:28.767151 containerd[1549]: time="2026-01-23T01:22:28.767109299Z" level=info msg="StartContainer for \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\" returns successfully" Jan 23 01:22:28.785301 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:22:28.785946 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:22:28.786160 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Jan 23 01:22:28.788735 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:22:28.789006 systemd[1]: cri-containerd-12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d.scope: Deactivated successfully. Jan 23 01:22:28.792982 containerd[1549]: time="2026-01-23T01:22:28.792948603Z" level=info msg="received container exit event container_id:\"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\" id:\"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\" pid:3200 exited_at:{seconds:1769131348 nanos:788918677}" Jan 23 01:22:28.815681 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:22:29.681318 kubelet[2704]: E0123 01:22:29.681281 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:29.683886 containerd[1549]: time="2026-01-23T01:22:29.683723917Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:22:29.700657 containerd[1549]: time="2026-01-23T01:22:29.698232087Z" level=info msg="Container 69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:29.706510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548658664.mount: Deactivated successfully. Jan 23 01:22:29.712297 containerd[1549]: time="2026-01-23T01:22:29.712269621Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\"" Jan 23 01:22:29.715678 containerd[1549]: time="2026-01-23T01:22:29.715246739Z" level=info msg="StartContainer for \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\"" Jan 23 01:22:29.717201 containerd[1549]: time="2026-01-23T01:22:29.717170065Z" level=info msg="connecting to shim 69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b" address="unix:///run/containerd/s/f59c7ae0061b607fef3c0ab7d5ee94707563644c99b3ef65f8c23e94cdfea788" protocol=ttrpc version=3 Jan 23 01:22:29.747791 systemd[1]: Started cri-containerd-69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b.scope - libcontainer container 69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b. Jan 23 01:22:29.828586 containerd[1549]: time="2026-01-23T01:22:29.828554661Z" level=info msg="StartContainer for \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\" returns successfully" Jan 23 01:22:29.830596 systemd[1]: cri-containerd-69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b.scope: Deactivated successfully. Jan 23 01:22:29.834971 containerd[1549]: time="2026-01-23T01:22:29.834914495Z" level=info msg="received container exit event container_id:\"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\" id:\"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\" pid:3246 exited_at:{seconds:1769131349 nanos:833795581}" Jan 23 01:22:29.970015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b-rootfs.mount: Deactivated successfully. 
Jan 23 01:22:30.687545 kubelet[2704]: E0123 01:22:30.687495 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:30.691864 containerd[1549]: time="2026-01-23T01:22:30.691813536Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:22:30.712667 containerd[1549]: time="2026-01-23T01:22:30.710901131Z" level=info msg="Container 47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:30.722381 containerd[1549]: time="2026-01-23T01:22:30.722314685Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\"" Jan 23 01:22:30.723009 containerd[1549]: time="2026-01-23T01:22:30.722980858Z" level=info msg="StartContainer for \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\"" Jan 23 01:22:30.724676 containerd[1549]: time="2026-01-23T01:22:30.724410867Z" level=info msg="connecting to shim 47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759" address="unix:///run/containerd/s/f59c7ae0061b607fef3c0ab7d5ee94707563644c99b3ef65f8c23e94cdfea788" protocol=ttrpc version=3 Jan 23 01:22:30.747759 systemd[1]: Started cri-containerd-47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759.scope - libcontainer container 47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759. Jan 23 01:22:30.791979 systemd[1]: cri-containerd-47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759.scope: Deactivated successfully. Jan 23 01:22:30.792317 containerd[1549]: time="2026-01-23T01:22:30.792272262Z" level=info msg="received container exit event container_id:\"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\" id:\"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\" pid:3285 exited_at:{seconds:1769131350 nanos:792125239}" Jan 23 01:22:30.802861 containerd[1549]: time="2026-01-23T01:22:30.802818854Z" level=info msg="StartContainer for \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\" returns successfully" Jan 23 01:22:30.968432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759-rootfs.mount: Deactivated successfully. Jan 23 01:22:31.695167 kubelet[2704]: E0123 01:22:31.694956 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:31.698245 containerd[1549]: time="2026-01-23T01:22:31.698194465Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 01:22:31.720533 containerd[1549]: time="2026-01-23T01:22:31.719795227Z" level=info msg="Container d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:31.722576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743103697.mount: Deactivated successfully. 
Jan 23 01:22:31.728197 containerd[1549]: time="2026-01-23T01:22:31.728149721Z" level=info msg="CreateContainer within sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\"" Jan 23 01:22:31.729886 containerd[1549]: time="2026-01-23T01:22:31.729046671Z" level=info msg="StartContainer for \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\"" Jan 23 01:22:31.730228 containerd[1549]: time="2026-01-23T01:22:31.730208948Z" level=info msg="connecting to shim d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871" address="unix:///run/containerd/s/f59c7ae0061b607fef3c0ab7d5ee94707563644c99b3ef65f8c23e94cdfea788" protocol=ttrpc version=3 Jan 23 01:22:31.758769 systemd[1]: Started cri-containerd-d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871.scope - libcontainer container d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871. Jan 23 01:22:31.812186 containerd[1549]: time="2026-01-23T01:22:31.812152545Z" level=info msg="StartContainer for \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\" returns successfully" Jan 23 01:22:31.967059 kubelet[2704]: I0123 01:22:31.966723 2704 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:22:31.998790 systemd[1]: Created slice kubepods-burstable-podfde3f8b9_ebdd_46df_8948_bdfdf1721068.slice - libcontainer container kubepods-burstable-podfde3f8b9_ebdd_46df_8948_bdfdf1721068.slice. Jan 23 01:22:32.007768 systemd[1]: Created slice kubepods-burstable-pod86d6458f_d8d6_4fac_a2c4_ba9882837627.slice - libcontainer container kubepods-burstable-pod86d6458f_d8d6_4fac_a2c4_ba9882837627.slice. 
Jan 23 01:22:32.073566 kubelet[2704]: I0123 01:22:32.073533 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86d6458f-d8d6-4fac-a2c4-ba9882837627-config-volume\") pod \"coredns-668d6bf9bc-zj22g\" (UID: \"86d6458f-d8d6-4fac-a2c4-ba9882837627\") " pod="kube-system/coredns-668d6bf9bc-zj22g" Jan 23 01:22:32.073566 kubelet[2704]: I0123 01:22:32.073570 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fde3f8b9-ebdd-46df-8948-bdfdf1721068-config-volume\") pod \"coredns-668d6bf9bc-cwd7s\" (UID: \"fde3f8b9-ebdd-46df-8948-bdfdf1721068\") " pod="kube-system/coredns-668d6bf9bc-cwd7s" Jan 23 01:22:32.073770 kubelet[2704]: I0123 01:22:32.073594 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7lmc\" (UniqueName: \"kubernetes.io/projected/fde3f8b9-ebdd-46df-8948-bdfdf1721068-kube-api-access-m7lmc\") pod \"coredns-668d6bf9bc-cwd7s\" (UID: \"fde3f8b9-ebdd-46df-8948-bdfdf1721068\") " pod="kube-system/coredns-668d6bf9bc-cwd7s" Jan 23 01:22:32.073770 kubelet[2704]: I0123 01:22:32.073623 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zngfh\" (UniqueName: \"kubernetes.io/projected/86d6458f-d8d6-4fac-a2c4-ba9882837627-kube-api-access-zngfh\") pod \"coredns-668d6bf9bc-zj22g\" (UID: \"86d6458f-d8d6-4fac-a2c4-ba9882837627\") " pod="kube-system/coredns-668d6bf9bc-zj22g" Jan 23 01:22:32.306034 kubelet[2704]: E0123 01:22:32.305901 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:32.307661 containerd[1549]: time="2026-01-23T01:22:32.307257395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cwd7s,Uid:fde3f8b9-ebdd-46df-8948-bdfdf1721068,Namespace:kube-system,Attempt:0,}" Jan 23 01:22:32.314817 kubelet[2704]: E0123 01:22:32.314782 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:32.315931 containerd[1549]: time="2026-01-23T01:22:32.315855595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zj22g,Uid:86d6458f-d8d6-4fac-a2c4-ba9882837627,Namespace:kube-system,Attempt:0,}" Jan 23 01:22:32.701235 kubelet[2704]: E0123 01:22:32.701202 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:32.715101 kubelet[2704]: I0123 01:22:32.715060 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rv85q" podStartSLOduration=5.894871613 podStartE2EDuration="19.715049086s" podCreationTimestamp="2026-01-23 01:22:13 +0000 UTC" firstStartedPulling="2026-01-23 01:22:14.137593203 +0000 UTC m=+7.643117668" lastFinishedPulling="2026-01-23 01:22:27.957770676 +0000 UTC m=+21.463295141" observedRunningTime="2026-01-23 01:22:32.713680471 +0000 UTC m=+26.219204966" watchObservedRunningTime="2026-01-23 01:22:32.715049086 +0000 UTC m=+26.220573551" Jan 23 01:22:33.707251 kubelet[2704]: E0123 01:22:33.707193 2704 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:34.705077 kubelet[2704]: E0123 01:22:34.705033 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:45.195285 containerd[1549]: time="2026-01-23T01:22:45.195220357Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:22:45.196152 containerd[1549]: time="2026-01-23T01:22:45.195978180Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 01:22:45.196615 containerd[1549]: time="2026-01-23T01:22:45.196590095Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:22:45.197694 containerd[1549]: time="2026-01-23T01:22:45.197673598Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 17.238186221s" Jan 23 01:22:45.197770 containerd[1549]: time="2026-01-23T01:22:45.197755303Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 01:22:45.199555 containerd[1549]: time="2026-01-23T01:22:45.199522245Z" level=info msg="CreateContainer within sandbox \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 01:22:45.211148 containerd[1549]: time="2026-01-23T01:22:45.210519230Z" level=info msg="Container fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:45.217548 containerd[1549]: time="2026-01-23T01:22:45.217527205Z" level=info msg="CreateContainer within sandbox \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\"" Jan 23 01:22:45.218867 containerd[1549]: time="2026-01-23T01:22:45.217965861Z" level=info msg="StartContainer for \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\"" Jan 23 01:22:45.218867 containerd[1549]: time="2026-01-23T01:22:45.218815420Z" level=info msg="connecting to shim fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160" address="unix:///run/containerd/s/b42a23b2cae96732e75b6271e028813c180cbb96b10435626b2a23f6291df1fd" protocol=ttrpc version=3 Jan 23 01:22:45.243765 systemd[1]: Started cri-containerd-fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160.scope - libcontainer container fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160. 
Jan 23 01:22:45.277420 containerd[1549]: time="2026-01-23T01:22:45.277392765Z" level=info msg="StartContainer for \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\" returns successfully" Jan 23 01:22:45.726289 kubelet[2704]: E0123 01:22:45.725666 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:46.727213 kubelet[2704]: E0123 01:22:46.727166 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:49.173459 systemd-networkd[1427]: cilium_host: Link UP Jan 23 01:22:49.174768 systemd-networkd[1427]: cilium_net: Link UP Jan 23 01:22:49.175408 systemd-networkd[1427]: cilium_net: Gained carrier Jan 23 01:22:49.175855 systemd-networkd[1427]: cilium_host: Gained carrier Jan 23 01:22:49.310014 systemd-networkd[1427]: cilium_vxlan: Link UP Jan 23 01:22:49.310023 systemd-networkd[1427]: cilium_vxlan: Gained carrier Jan 23 01:22:49.577689 kernel: NET: Registered PF_ALG protocol family Jan 23 01:22:49.907962 systemd-networkd[1427]: cilium_net: Gained IPv6LL Jan 23 01:22:50.165597 systemd-networkd[1427]: cilium_host: Gained IPv6LL Jan 23 01:22:50.424313 systemd-networkd[1427]: lxc_health: Link UP Jan 23 01:22:50.426676 systemd-networkd[1427]: lxc_health: Gained carrier Jan 23 01:22:50.675870 systemd-networkd[1427]: cilium_vxlan: Gained IPv6LL Jan 23 01:22:50.908747 systemd-networkd[1427]: lxcfdbf5469bc34: Link UP Jan 23 01:22:50.919409 kernel: eth0: renamed from tmpabc88 Jan 23 01:22:50.925259 systemd-networkd[1427]: lxc937989fc3684: Link UP Jan 23 01:22:50.932337 systemd-networkd[1427]: lxcfdbf5469bc34: Gained carrier Jan 23 01:22:50.934696 kernel: eth0: renamed from tmpd4456 Jan 23 01:22:50.939973 systemd-networkd[1427]: lxc937989fc3684: Gained carrier Jan 23 01:22:51.571906 systemd-networkd[1427]: lxc_health: Gained IPv6LL Jan 23 01:22:51.988596 kubelet[2704]: E0123 01:22:51.988435 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:52.012210 kubelet[2704]: I0123 01:22:52.012159 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-z6v5h" podStartSLOduration=8.313253041 podStartE2EDuration="39.012144071s" podCreationTimestamp="2026-01-23 01:22:13 +0000 UTC" firstStartedPulling="2026-01-23 01:22:14.499524641 +0000 UTC m=+8.005049106" lastFinishedPulling="2026-01-23 01:22:45.198415671 +0000 UTC m=+38.703940136" observedRunningTime="2026-01-23 01:22:45.738534872 +0000 UTC m=+39.244059337" watchObservedRunningTime="2026-01-23 01:22:52.012144071 +0000 UTC m=+45.517668536" Jan 23 01:22:52.147900 systemd-networkd[1427]: lxcfdbf5469bc34: Gained IPv6LL Jan 23 01:22:52.339874 systemd-networkd[1427]: lxc937989fc3684: Gained IPv6LL Jan 23 01:22:52.748676 kubelet[2704]: E0123 01:22:52.746231 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:53.749370 kubelet[2704]: E0123 01:22:53.747830 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:54.604173 containerd[1549]: time="2026-01-23T01:22:54.604070812Z" level=info msg="connecting to shim d4456701470131d8486a83c7e43d73c8c0c7ce64dc4a6e7f7a19d82743a434ad" address="unix:///run/containerd/s/e236b029de468474c9b908f4a9ba36845dc6ea44605c95ca4096def9940afc7b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:22:54.641763 systemd[1]: Started cri-containerd-d4456701470131d8486a83c7e43d73c8c0c7ce64dc4a6e7f7a19d82743a434ad.scope - libcontainer container d4456701470131d8486a83c7e43d73c8c0c7ce64dc4a6e7f7a19d82743a434ad. Jan 23 01:22:54.679377 containerd[1549]: time="2026-01-23T01:22:54.679270010Z" level=info msg="connecting to shim abc88a1f4be2d6986238c5372e7e11642f8212f547a3c7a34c0e92970f944398" address="unix:///run/containerd/s/c7c765431e7e9452a88c5c4897c766b87bea9b906e169a48044d22ba7c82f694" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:22:54.708772 systemd[1]: Started cri-containerd-abc88a1f4be2d6986238c5372e7e11642f8212f547a3c7a34c0e92970f944398.scope - libcontainer container abc88a1f4be2d6986238c5372e7e11642f8212f547a3c7a34c0e92970f944398. Jan 23 01:22:54.778119 containerd[1549]: time="2026-01-23T01:22:54.778054730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zj22g,Uid:86d6458f-d8d6-4fac-a2c4-ba9882837627,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4456701470131d8486a83c7e43d73c8c0c7ce64dc4a6e7f7a19d82743a434ad\"" Jan 23 01:22:54.779607 kubelet[2704]: E0123 01:22:54.779317 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:54.783705 containerd[1549]: time="2026-01-23T01:22:54.783244556Z" level=info msg="CreateContainer within sandbox \"d4456701470131d8486a83c7e43d73c8c0c7ce64dc4a6e7f7a19d82743a434ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:22:54.798127 containerd[1549]: time="2026-01-23T01:22:54.798081448Z" level=info msg="Container 3b61d52b704ada68cb2540fdfabeb21c1f675e972f59c7fd9b7a1d13da959e39: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:54.806091 containerd[1549]: time="2026-01-23T01:22:54.806031332Z" level=info msg="CreateContainer within sandbox \"d4456701470131d8486a83c7e43d73c8c0c7ce64dc4a6e7f7a19d82743a434ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b61d52b704ada68cb2540fdfabeb21c1f675e972f59c7fd9b7a1d13da959e39\"" Jan 23 01:22:54.807925 containerd[1549]: time="2026-01-23T01:22:54.807887801Z" level=info msg="StartContainer for \"3b61d52b704ada68cb2540fdfabeb21c1f675e972f59c7fd9b7a1d13da959e39\"" Jan 23 01:22:54.810671 containerd[1549]: time="2026-01-23T01:22:54.810517305Z" level=info msg="connecting to shim 3b61d52b704ada68cb2540fdfabeb21c1f675e972f59c7fd9b7a1d13da959e39" address="unix:///run/containerd/s/e236b029de468474c9b908f4a9ba36845dc6ea44605c95ca4096def9940afc7b" protocol=ttrpc version=3 Jan 23 01:22:54.822952 containerd[1549]: time="2026-01-23T01:22:54.822892979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cwd7s,Uid:fde3f8b9-ebdd-46df-8948-bdfdf1721068,Namespace:kube-system,Attempt:0,} returns sandbox id \"abc88a1f4be2d6986238c5372e7e11642f8212f547a3c7a34c0e92970f944398\"" Jan 23 01:22:54.824033 kubelet[2704]: E0123 01:22:54.823859 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 
172.232.0.16 172.232.0.21" Jan 23 01:22:54.835302 containerd[1549]: time="2026-01-23T01:22:54.834996626Z" level=info msg="CreateContainer within sandbox \"abc88a1f4be2d6986238c5372e7e11642f8212f547a3c7a34c0e92970f944398\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:22:54.848780 systemd[1]: Started cri-containerd-3b61d52b704ada68cb2540fdfabeb21c1f675e972f59c7fd9b7a1d13da959e39.scope - libcontainer container 3b61d52b704ada68cb2540fdfabeb21c1f675e972f59c7fd9b7a1d13da959e39. Jan 23 01:22:54.850392 containerd[1549]: time="2026-01-23T01:22:54.850363036Z" level=info msg="Container 40c01677cf7a4a76d3dd01c1bbd738488bb1eb950118dce5786f9effcf5d8157: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:22:54.856416 containerd[1549]: time="2026-01-23T01:22:54.855978145Z" level=info msg="CreateContainer within sandbox \"abc88a1f4be2d6986238c5372e7e11642f8212f547a3c7a34c0e92970f944398\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40c01677cf7a4a76d3dd01c1bbd738488bb1eb950118dce5786f9effcf5d8157\"" Jan 23 01:22:54.859283 containerd[1549]: time="2026-01-23T01:22:54.857488233Z" level=info msg="StartContainer for \"40c01677cf7a4a76d3dd01c1bbd738488bb1eb950118dce5786f9effcf5d8157\"" Jan 23 01:22:54.859454 containerd[1549]: time="2026-01-23T01:22:54.859434735Z" level=info msg="connecting to shim 40c01677cf7a4a76d3dd01c1bbd738488bb1eb950118dce5786f9effcf5d8157" address="unix:///run/containerd/s/c7c765431e7e9452a88c5c4897c766b87bea9b906e169a48044d22ba7c82f694" protocol=ttrpc version=3 Jan 23 01:22:54.886863 systemd[1]: Started cri-containerd-40c01677cf7a4a76d3dd01c1bbd738488bb1eb950118dce5786f9effcf5d8157.scope - libcontainer container 40c01677cf7a4a76d3dd01c1bbd738488bb1eb950118dce5786f9effcf5d8157. Jan 23 01:22:54.918768 containerd[1549]: time="2026-01-23T01:22:54.918596152Z" level=info msg="StartContainer for \"3b61d52b704ada68cb2540fdfabeb21c1f675e972f59c7fd9b7a1d13da959e39\" returns successfully" Jan 23 01:22:54.944841 containerd[1549]: time="2026-01-23T01:22:54.944800777Z" level=info msg="StartContainer for \"40c01677cf7a4a76d3dd01c1bbd738488bb1eb950118dce5786f9effcf5d8157\" returns successfully" Jan 23 01:22:55.759090 kubelet[2704]: E0123 01:22:55.759052 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:55.764003 kubelet[2704]: E0123 01:22:55.762963 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:55.772460 kubelet[2704]: I0123 01:22:55.772412 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cwd7s" podStartSLOduration=42.772210347 podStartE2EDuration="42.772210347s" podCreationTimestamp="2026-01-23 01:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:22:55.771385073 +0000 UTC m=+49.276909538" watchObservedRunningTime="2026-01-23 01:22:55.772210347 +0000 UTC m=+49.277734812" Jan 23 01:22:56.765171 kubelet[2704]: E0123 01:22:56.765086 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:56.765171 kubelet[2704]: E0123 01:22:56.765104 2704 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:57.767706 kubelet[2704]: E0123 01:22:57.767170 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:22:57.767706 kubelet[2704]: E0123 01:22:57.767170 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:23:22.591070 kubelet[2704]: E0123 01:23:22.590979 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:23:28.588660 kubelet[2704]: E0123 01:23:28.588126 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:23:28.588660 kubelet[2704]: E0123 01:23:28.588576 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:23:38.588721 kubelet[2704]: E0123 01:23:38.588075 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:23:55.508876 systemd[1]: Started sshd@7-172.238.187.240:22-68.220.241.50:40230.service - OpenSSH per-connection server daemon (68.220.241.50:40230). Jan 23 01:23:55.683946 sshd[4040]: Accepted publickey for core from 68.220.241.50 port 40230 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:23:55.686053 sshd-session[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:23:55.691071 systemd-logind[1524]: New session 8 of user core. Jan 23 01:23:55.699998 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:23:55.894899 sshd[4043]: Connection closed by 68.220.241.50 port 40230 Jan 23 01:23:55.895838 sshd-session[4040]: pam_unix(sshd:session): session closed for user core Jan 23 01:23:55.900171 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:23:55.900614 systemd[1]: sshd@7-172.238.187.240:22-68.220.241.50:40230.service: Deactivated successfully. Jan 23 01:23:55.903293 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:23:55.905268 systemd-logind[1524]: Removed session 8. Jan 23 01:24:00.930773 systemd[1]: Started sshd@8-172.238.187.240:22-68.220.241.50:40246.service - OpenSSH per-connection server daemon (68.220.241.50:40246). Jan 23 01:24:01.096679 sshd[4056]: Accepted publickey for core from 68.220.241.50 port 40246 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:01.098187 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:01.103376 systemd-logind[1524]: New session 9 of user core. Jan 23 01:24:01.107765 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 01:24:01.286834 sshd[4059]: Connection closed by 68.220.241.50 port 40246 Jan 23 01:24:01.287743 sshd-session[4056]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:01.292245 systemd[1]: sshd@8-172.238.187.240:22-68.220.241.50:40246.service: Deactivated successfully. Jan 23 01:24:01.295004 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:24:01.295897 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:24:01.297289 systemd-logind[1524]: Removed session 9. Jan 23 01:24:06.324083 systemd[1]: Started sshd@9-172.238.187.240:22-68.220.241.50:39818.service - OpenSSH per-connection server daemon (68.220.241.50:39818). Jan 23 01:24:06.501675 sshd[4071]: Accepted publickey for core from 68.220.241.50 port 39818 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:06.502605 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:06.508216 systemd-logind[1524]: New session 10 of user core. Jan 23 01:24:06.517767 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:24:06.694423 sshd[4074]: Connection closed by 68.220.241.50 port 39818 Jan 23 01:24:06.695121 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:06.700777 systemd[1]: sshd@9-172.238.187.240:22-68.220.241.50:39818.service: Deactivated successfully. Jan 23 01:24:06.703207 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:24:06.704055 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:24:06.705599 systemd-logind[1524]: Removed session 10. Jan 23 01:24:09.589076 kubelet[2704]: E0123 01:24:09.587878 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:10.588606 kubelet[2704]: E0123 01:24:10.587877 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:11.738532 systemd[1]: Started sshd@10-172.238.187.240:22-68.220.241.50:39820.service - OpenSSH per-connection server daemon (68.220.241.50:39820). Jan 23 01:24:11.924380 sshd[4089]: Accepted publickey for core from 68.220.241.50 port 39820 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:11.926113 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:11.932711 systemd-logind[1524]: New session 11 of user core. Jan 23 01:24:11.943866 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:24:12.130858 sshd[4092]: Connection closed by 68.220.241.50 port 39820 Jan 23 01:24:12.131757 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:12.137541 systemd[1]: sshd@10-172.238.187.240:22-68.220.241.50:39820.service: Deactivated successfully. Jan 23 01:24:12.140394 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:24:12.141376 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit. Jan 23 01:24:12.143601 systemd-logind[1524]: Removed session 11. 
Jan 23 01:24:15.588811 kubelet[2704]: E0123 01:24:15.588695 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:17.172442 systemd[1]: Started sshd@11-172.238.187.240:22-68.220.241.50:54290.service - OpenSSH per-connection server daemon (68.220.241.50:54290). Jan 23 01:24:17.365925 sshd[4107]: Accepted publickey for core from 68.220.241.50 port 54290 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:17.367956 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:17.372810 systemd-logind[1524]: New session 12 of user core. Jan 23 01:24:17.387849 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:24:17.573873 sshd[4110]: Connection closed by 68.220.241.50 port 54290 Jan 23 01:24:17.574413 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:17.578446 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:24:17.579077 systemd[1]: sshd@11-172.238.187.240:22-68.220.241.50:54290.service: Deactivated successfully. Jan 23 01:24:17.581271 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:24:17.582903 systemd-logind[1524]: Removed session 12. Jan 23 01:24:17.587757 kubelet[2704]: E0123 01:24:17.587717 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:17.605984 systemd[1]: Started sshd@12-172.238.187.240:22-68.220.241.50:54294.service - OpenSSH per-connection server daemon (68.220.241.50:54294). Jan 23 01:24:17.768946 sshd[4123]: Accepted publickey for core from 68.220.241.50 port 54294 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:17.770931 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:17.777718 systemd-logind[1524]: New session 13 of user core. Jan 23 01:24:17.788828 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:24:18.018206 sshd[4126]: Connection closed by 68.220.241.50 port 54294 Jan 23 01:24:18.019036 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:18.028982 systemd[1]: sshd@12-172.238.187.240:22-68.220.241.50:54294.service: Deactivated successfully. Jan 23 01:24:18.032168 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:24:18.034718 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:24:18.036995 systemd-logind[1524]: Removed session 13. Jan 23 01:24:18.055421 systemd[1]: Started sshd@13-172.238.187.240:22-68.220.241.50:54308.service - OpenSSH per-connection server daemon (68.220.241.50:54308). Jan 23 01:24:18.236869 sshd[4136]: Accepted publickey for core from 68.220.241.50 port 54308 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:18.239719 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:18.246720 systemd-logind[1524]: New session 14 of user core. Jan 23 01:24:18.255805 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 01:24:18.460775 sshd[4139]: Connection closed by 68.220.241.50 port 54308 Jan 23 01:24:18.461589 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:18.467240 systemd[1]: sshd@13-172.238.187.240:22-68.220.241.50:54308.service: Deactivated successfully. Jan 23 01:24:18.470678 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:24:18.472460 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:24:18.475072 systemd-logind[1524]: Removed session 14. Jan 23 01:24:23.494250 systemd[1]: Started sshd@14-172.238.187.240:22-68.220.241.50:43752.service - OpenSSH per-connection server daemon (68.220.241.50:43752). Jan 23 01:24:23.659866 sshd[4152]: Accepted publickey for core from 68.220.241.50 port 43752 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:23.661705 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:23.666734 systemd-logind[1524]: New session 15 of user core. Jan 23 01:24:23.670780 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:24:23.841129 sshd[4155]: Connection closed by 68.220.241.50 port 43752 Jan 23 01:24:23.841829 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:23.846352 systemd[1]: sshd@14-172.238.187.240:22-68.220.241.50:43752.service: Deactivated successfully. Jan 23 01:24:23.852169 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:24:23.853446 systemd-logind[1524]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:24:23.854858 systemd-logind[1524]: Removed session 15. Jan 23 01:24:27.587812 kubelet[2704]: E0123 01:24:27.587778 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:28.875119 systemd[1]: Started sshd@15-172.238.187.240:22-68.220.241.50:43758.service - OpenSSH per-connection server daemon (68.220.241.50:43758). Jan 23 01:24:29.041559 sshd[4167]: Accepted publickey for core from 68.220.241.50 port 43758 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:29.043349 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:29.048562 systemd-logind[1524]: New session 16 of user core. Jan 23 01:24:29.053751 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 01:24:29.229612 sshd[4170]: Connection closed by 68.220.241.50 port 43758 Jan 23 01:24:29.231513 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:29.235508 systemd[1]: sshd@15-172.238.187.240:22-68.220.241.50:43758.service: Deactivated successfully. Jan 23 01:24:29.238201 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:24:29.239677 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:24:29.240986 systemd-logind[1524]: Removed session 16. Jan 23 01:24:29.264223 systemd[1]: Started sshd@16-172.238.187.240:22-68.220.241.50:43762.service - OpenSSH per-connection server daemon (68.220.241.50:43762). 
Jan 23 01:24:29.456879 sshd[4181]: Accepted publickey for core from 68.220.241.50 port 43762 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:29.458331 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:29.463696 systemd-logind[1524]: New session 17 of user core. Jan 23 01:24:29.475005 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 01:24:29.588579 kubelet[2704]: E0123 01:24:29.587825 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:29.682203 sshd[4184]: Connection closed by 68.220.241.50 port 43762 Jan 23 01:24:29.682837 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:29.687408 systemd[1]: sshd@16-172.238.187.240:22-68.220.241.50:43762.service: Deactivated successfully. Jan 23 01:24:29.689358 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:24:29.692770 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:24:29.693973 systemd-logind[1524]: Removed session 17. Jan 23 01:24:29.716819 systemd[1]: Started sshd@17-172.238.187.240:22-68.220.241.50:43778.service - OpenSSH per-connection server daemon (68.220.241.50:43778). Jan 23 01:24:29.882504 sshd[4194]: Accepted publickey for core from 68.220.241.50 port 43778 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:29.884061 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:29.889471 systemd-logind[1524]: New session 18 of user core. Jan 23 01:24:29.893926 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:24:30.617975 sshd[4197]: Connection closed by 68.220.241.50 port 43778 Jan 23 01:24:30.620147 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:30.626164 systemd[1]: sshd@17-172.238.187.240:22-68.220.241.50:43778.service: Deactivated successfully. Jan 23 01:24:30.627811 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:24:30.631566 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:24:30.634962 systemd-logind[1524]: Removed session 18. Jan 23 01:24:30.648307 systemd[1]: Started sshd@18-172.238.187.240:22-68.220.241.50:43794.service - OpenSSH per-connection server daemon (68.220.241.50:43794). Jan 23 01:24:30.814840 sshd[4214]: Accepted publickey for core from 68.220.241.50 port 43794 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:30.816661 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:30.821811 systemd-logind[1524]: New session 19 of user core. Jan 23 01:24:30.832782 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:24:31.151417 sshd[4217]: Connection closed by 68.220.241.50 port 43794 Jan 23 01:24:31.149311 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:31.154762 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:24:31.158795 systemd[1]: sshd@18-172.238.187.240:22-68.220.241.50:43794.service: Deactivated successfully. Jan 23 01:24:31.163143 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:24:31.165346 systemd-logind[1524]: Removed session 19. 
Jan 23 01:24:31.183070 systemd[1]: Started sshd@19-172.238.187.240:22-68.220.241.50:43808.service - OpenSSH per-connection server daemon (68.220.241.50:43808). Jan 23 01:24:31.363110 sshd[4227]: Accepted publickey for core from 68.220.241.50 port 43808 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:31.364650 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:31.368976 systemd-logind[1524]: New session 20 of user core. Jan 23 01:24:31.372761 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:24:31.548715 sshd[4230]: Connection closed by 68.220.241.50 port 43808 Jan 23 01:24:31.550018 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:31.557332 systemd[1]: sshd@19-172.238.187.240:22-68.220.241.50:43808.service: Deactivated successfully. Jan 23 01:24:31.560313 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:24:31.561365 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:24:31.563390 systemd-logind[1524]: Removed session 20. Jan 23 01:24:34.588804 kubelet[2704]: E0123 01:24:34.588339 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:36.586878 systemd[1]: Started sshd@20-172.238.187.240:22-68.220.241.50:46046.service - OpenSSH per-connection server daemon (68.220.241.50:46046). Jan 23 01:24:36.772306 sshd[4244]: Accepted publickey for core from 68.220.241.50 port 46046 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:36.774287 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:36.781241 systemd-logind[1524]: New session 21 of user core. Jan 23 01:24:36.791784 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 01:24:36.967442 sshd[4247]: Connection closed by 68.220.241.50 port 46046 Jan 23 01:24:36.968795 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:36.973395 systemd-logind[1524]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:24:36.973875 systemd[1]: sshd@20-172.238.187.240:22-68.220.241.50:46046.service: Deactivated successfully. Jan 23 01:24:36.976178 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:24:36.978291 systemd-logind[1524]: Removed session 21. Jan 23 01:24:40.587760 kubelet[2704]: E0123 01:24:40.587621 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:41.995945 systemd[1]: Started sshd@21-172.238.187.240:22-68.220.241.50:46050.service - OpenSSH per-connection server daemon (68.220.241.50:46050). Jan 23 01:24:42.157680 sshd[4259]: Accepted publickey for core from 68.220.241.50 port 46050 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:42.158874 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:42.164392 systemd-logind[1524]: New session 22 of user core. Jan 23 01:24:42.167757 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 23 01:24:42.349749 sshd[4262]: Connection closed by 68.220.241.50 port 46050 Jan 23 01:24:42.350830 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:42.356267 systemd[1]: sshd@21-172.238.187.240:22-68.220.241.50:46050.service: Deactivated successfully. Jan 23 01:24:42.359202 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:24:42.360144 systemd-logind[1524]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:24:42.362478 systemd-logind[1524]: Removed session 22. Jan 23 01:24:47.386017 systemd[1]: Started sshd@22-172.238.187.240:22-68.220.241.50:56688.service - OpenSSH per-connection server daemon (68.220.241.50:56688). Jan 23 01:24:47.564926 sshd[4276]: Accepted publickey for core from 68.220.241.50 port 56688 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:47.566420 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:47.572321 systemd-logind[1524]: New session 23 of user core. Jan 23 01:24:47.581766 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 01:24:47.760737 sshd[4279]: Connection closed by 68.220.241.50 port 56688 Jan 23 01:24:47.761813 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:47.766504 systemd[1]: sshd@22-172.238.187.240:22-68.220.241.50:56688.service: Deactivated successfully. Jan 23 01:24:47.768772 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 01:24:47.770315 systemd-logind[1524]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:24:47.771523 systemd-logind[1524]: Removed session 23. Jan 23 01:24:47.793837 systemd[1]: Started sshd@23-172.238.187.240:22-68.220.241.50:56690.service - OpenSSH per-connection server daemon (68.220.241.50:56690). Jan 23 01:24:47.981720 sshd[4291]: Accepted publickey for core from 68.220.241.50 port 56690 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:47.983719 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:47.994434 systemd-logind[1524]: New session 24 of user core. Jan 23 01:24:48.000759 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 01:24:49.561380 kubelet[2704]: I0123 01:24:49.561253 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zj22g" podStartSLOduration=156.561229222 podStartE2EDuration="2m36.561229222s" podCreationTimestamp="2026-01-23 01:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:22:55.802474321 +0000 UTC m=+49.307998786" watchObservedRunningTime="2026-01-23 01:24:49.561229222 +0000 UTC m=+163.066753687" Jan 23 01:24:49.581523 containerd[1549]: time="2026-01-23T01:24:49.581460527Z" level=info msg="StopContainer for \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\" with timeout 30 (s)" Jan 23 01:24:49.584769 containerd[1549]: time="2026-01-23T01:24:49.584693071Z" level=info msg="Stop container \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\" with signal terminated" Jan 23 01:24:49.608448 systemd[1]: cri-containerd-fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160.scope: Deactivated successfully. 
Jan 23 01:24:49.611799 containerd[1549]: time="2026-01-23T01:24:49.611604996Z" level=info msg="received container exit event container_id:\"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\" id:\"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\" pid:3466 exited_at:{seconds:1769131489 nanos:611093096}" Jan 23 01:24:49.614934 containerd[1549]: time="2026-01-23T01:24:49.614898268Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:24:49.627375 containerd[1549]: time="2026-01-23T01:24:49.627313696Z" level=info msg="StopContainer for \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\" with timeout 2 (s)" Jan 23 01:24:49.627847 containerd[1549]: time="2026-01-23T01:24:49.627830116Z" level=info msg="Stop container \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\" with signal terminated" Jan 23 01:24:49.640182 systemd-networkd[1427]: lxc_health: Link DOWN Jan 23 01:24:49.641326 systemd-networkd[1427]: lxc_health: Lost carrier Jan 23 01:24:49.661831 systemd[1]: cri-containerd-d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871.scope: Deactivated successfully. Jan 23 01:24:49.662364 systemd[1]: cri-containerd-d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871.scope: Consumed 7.138s CPU time, 123.8M memory peak, 112K read from disk, 13.3M written to disk. Jan 23 01:24:49.672123 containerd[1549]: time="2026-01-23T01:24:49.670276938Z" level=info msg="received container exit event container_id:\"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\" id:\"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\" pid:3322 exited_at:{seconds:1769131489 nanos:669821285}" Jan 23 01:24:49.676244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160-rootfs.mount: Deactivated successfully. Jan 23 01:24:49.693244 containerd[1549]: time="2026-01-23T01:24:49.693199358Z" level=info msg="StopContainer for \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\" returns successfully" Jan 23 01:24:49.695824 containerd[1549]: time="2026-01-23T01:24:49.695721639Z" level=info msg="StopPodSandbox for \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\"" Jan 23 01:24:49.695901 containerd[1549]: time="2026-01-23T01:24:49.695875434Z" level=info msg="Container to stop \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:24:49.709417 systemd[1]: cri-containerd-b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c.scope: Deactivated successfully. Jan 23 01:24:49.718269 containerd[1549]: time="2026-01-23T01:24:49.717933037Z" level=info msg="received sandbox exit event container_id:\"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" id:\"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" exit_status:137 exited_at:{seconds:1769131489 nanos:717102050}" monitor_name=podsandbox Jan 23 01:24:49.731834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871-rootfs.mount: Deactivated successfully. 
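Note on the StopContainer entries above: the "Stop container ... with signal terminated" messages with timeouts of 30 s and 2 s reflect the usual graceful-stop sequence, where SIGTERM is delivered first and SIGKILL follows if the task has not exited within the timeout. A rough sketch of the same pattern against the containerd Go client; the module path, socket path, and `k8s.io` namespace are assumptions, and this is not the CRI plugin's actual implementation:

```go
package main

import (
	"context"
	"log"
	"os"
	"syscall"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed socket path and the namespace kubelet's CRI containers live in.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	id := os.Args[1] // container ID, e.g. the fb2824... ID from the log

	c, err := client.LoadContainer(ctx, id)
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// "Stop container with signal terminated", then escalate after the timeout.
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case status := <-exitCh:
		log.Printf("exited with status %d", status.ExitCode())
	case <-time.After(30 * time.Second):
		log.Print("timeout reached, sending SIGKILL")
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
}
```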
Jan 23 01:24:49.741633 containerd[1549]: time="2026-01-23T01:24:49.741188174Z" level=info msg="StopContainer for \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\" returns successfully" Jan 23 01:24:49.741799 containerd[1549]: time="2026-01-23T01:24:49.741733213Z" level=info msg="StopPodSandbox for \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\"" Jan 23 01:24:49.741867 containerd[1549]: time="2026-01-23T01:24:49.741831019Z" level=info msg="Container to stop \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:24:49.741910 containerd[1549]: time="2026-01-23T01:24:49.741876987Z" level=info msg="Container to stop \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:24:49.741910 containerd[1549]: time="2026-01-23T01:24:49.741888037Z" level=info msg="Container to stop \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:24:49.741910 containerd[1549]: time="2026-01-23T01:24:49.741897097Z" level=info msg="Container to stop \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:24:49.741910 containerd[1549]: time="2026-01-23T01:24:49.741904486Z" level=info msg="Container to stop \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:24:49.762991 systemd[1]: cri-containerd-db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0.scope: Deactivated successfully. Jan 23 01:24:49.770968 containerd[1549]: time="2026-01-23T01:24:49.770923649Z" level=info msg="received sandbox exit event container_id:\"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" id:\"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" exit_status:137 exited_at:{seconds:1769131489 nanos:769721986}" monitor_name=podsandbox Jan 23 01:24:49.782955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c-rootfs.mount: Deactivated successfully. 
Jan 23 01:24:49.789961 containerd[1549]: time="2026-01-23T01:24:49.789925992Z" level=info msg="shim disconnected" id=b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c namespace=k8s.io Jan 23 01:24:49.790622 containerd[1549]: time="2026-01-23T01:24:49.790425133Z" level=warning msg="cleaning up after shim disconnected" id=b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c namespace=k8s.io Jan 23 01:24:49.790622 containerd[1549]: time="2026-01-23T01:24:49.790443522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:24:49.821132 containerd[1549]: time="2026-01-23T01:24:49.821090732Z" level=info msg="received sandbox container exit event sandbox_id:\"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" exit_status:137 exited_at:{seconds:1769131489 nanos:717102050}" monitor_name=criService Jan 23 01:24:49.822124 containerd[1549]: time="2026-01-23T01:24:49.821835352Z" level=info msg="TearDown network for sandbox \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" successfully" Jan 23 01:24:49.822124 containerd[1549]: time="2026-01-23T01:24:49.821862171Z" level=info msg="StopPodSandbox for \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" returns successfully" Jan 23 01:24:49.821587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0-rootfs.mount: Deactivated successfully. Jan 23 01:24:49.826430 containerd[1549]: time="2026-01-23T01:24:49.826123466Z" level=info msg="shim disconnected" id=db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0 namespace=k8s.io Jan 23 01:24:49.826430 containerd[1549]: time="2026-01-23T01:24:49.826149515Z" level=warning msg="cleaning up after shim disconnected" id=db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0 namespace=k8s.io Jan 23 01:24:49.826430 containerd[1549]: time="2026-01-23T01:24:49.826357647Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:24:49.827416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c-shm.mount: Deactivated successfully. 
Jan 23 01:24:49.849894 containerd[1549]: time="2026-01-23T01:24:49.849799476Z" level=info msg="received sandbox container exit event sandbox_id:\"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" exit_status:137 exited_at:{seconds:1769131489 nanos:769721986}" monitor_name=criService Jan 23 01:24:49.850564 containerd[1549]: time="2026-01-23T01:24:49.850436722Z" level=info msg="TearDown network for sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" successfully" Jan 23 01:24:49.850564 containerd[1549]: time="2026-01-23T01:24:49.850483741Z" level=info msg="StopPodSandbox for \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" returns successfully" Jan 23 01:24:49.955278 kubelet[2704]: I0123 01:24:49.955217 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-host-proc-sys-kernel\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955278 kubelet[2704]: I0123 01:24:49.955284 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-run\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955563 kubelet[2704]: I0123 01:24:49.955319 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-etc-cni-netd\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955563 kubelet[2704]: I0123 01:24:49.955346 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72mwb\" (UniqueName: \"kubernetes.io/projected/960e8f72-2a96-4356-96a4-71f44baf117f-kube-api-access-72mwb\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955563 kubelet[2704]: I0123 01:24:49.955368 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-lib-modules\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955563 kubelet[2704]: I0123 01:24:49.955383 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-xtables-lock\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955563 kubelet[2704]: I0123 01:24:49.955398 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-host-proc-sys-net\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955563 kubelet[2704]: I0123 01:24:49.955415 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-cgroup\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955755 
kubelet[2704]: I0123 01:24:49.955432 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-hostproc\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955755 kubelet[2704]: I0123 01:24:49.955447 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/960e8f72-2a96-4356-96a4-71f44baf117f-hubble-tls\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955755 kubelet[2704]: I0123 01:24:49.955463 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-bpf-maps\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955755 kubelet[2704]: I0123 01:24:49.955480 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/226cfc49-acf8-4abc-bf26-69ad52ccf7e9-cilium-config-path\") pod \"226cfc49-acf8-4abc-bf26-69ad52ccf7e9\" (UID: \"226cfc49-acf8-4abc-bf26-69ad52ccf7e9\") " Jan 23 01:24:49.955755 kubelet[2704]: I0123 01:24:49.955500 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-config-path\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955755 kubelet[2704]: I0123 01:24:49.955518 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkwdt\" (UniqueName: \"kubernetes.io/projected/226cfc49-acf8-4abc-bf26-69ad52ccf7e9-kube-api-access-kkwdt\") pod \"226cfc49-acf8-4abc-bf26-69ad52ccf7e9\" (UID: \"226cfc49-acf8-4abc-bf26-69ad52ccf7e9\") " Jan 23 01:24:49.955943 kubelet[2704]: I0123 01:24:49.955533 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cni-path\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.955943 kubelet[2704]: I0123 01:24:49.955564 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/960e8f72-2a96-4356-96a4-71f44baf117f-clustermesh-secrets\") pod \"960e8f72-2a96-4356-96a4-71f44baf117f\" (UID: \"960e8f72-2a96-4356-96a4-71f44baf117f\") " Jan 23 01:24:49.956504 kubelet[2704]: I0123 01:24:49.956466 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.956556 kubelet[2704]: I0123 01:24:49.956526 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.956556 kubelet[2704]: I0123 01:24:49.956544 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.956625 kubelet[2704]: I0123 01:24:49.956557 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.957908 kubelet[2704]: I0123 01:24:49.957861 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.957908 kubelet[2704]: I0123 01:24:49.957904 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.958869 kubelet[2704]: I0123 01:24:49.957928 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.961958 kubelet[2704]: I0123 01:24:49.961921 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-hostproc" (OuterVolumeSpecName: "hostproc") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.963659 kubelet[2704]: I0123 01:24:49.963573 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.965720 kubelet[2704]: I0123 01:24:49.965348 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cni-path" (OuterVolumeSpecName: "cni-path") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:24:49.966519 kubelet[2704]: I0123 01:24:49.966481 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/226cfc49-acf8-4abc-bf26-69ad52ccf7e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "226cfc49-acf8-4abc-bf26-69ad52ccf7e9" (UID: "226cfc49-acf8-4abc-bf26-69ad52ccf7e9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:24:49.967600 kubelet[2704]: I0123 01:24:49.966606 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960e8f72-2a96-4356-96a4-71f44baf117f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:24:49.969982 kubelet[2704]: I0123 01:24:49.969947 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960e8f72-2a96-4356-96a4-71f44baf117f-kube-api-access-72mwb" (OuterVolumeSpecName: "kube-api-access-72mwb") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "kube-api-access-72mwb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:24:49.971800 kubelet[2704]: I0123 01:24:49.971770 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960e8f72-2a96-4356-96a4-71f44baf117f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:24:49.972803 kubelet[2704]: I0123 01:24:49.972762 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226cfc49-acf8-4abc-bf26-69ad52ccf7e9-kube-api-access-kkwdt" (OuterVolumeSpecName: "kube-api-access-kkwdt") pod "226cfc49-acf8-4abc-bf26-69ad52ccf7e9" (UID: "226cfc49-acf8-4abc-bf26-69ad52ccf7e9"). InnerVolumeSpecName "kube-api-access-kkwdt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:24:49.973401 kubelet[2704]: I0123 01:24:49.973376 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "960e8f72-2a96-4356-96a4-71f44baf117f" (UID: "960e8f72-2a96-4356-96a4-71f44baf117f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:24:49.998893 kubelet[2704]: I0123 01:24:49.998850 2704 scope.go:117] "RemoveContainer" containerID="fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160" Jan 23 01:24:50.005842 containerd[1549]: time="2026-01-23T01:24:50.005765912Z" level=info msg="RemoveContainer for \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\"" Jan 23 01:24:50.013032 containerd[1549]: time="2026-01-23T01:24:50.012975705Z" level=info msg="RemoveContainer for \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\" returns successfully" Jan 23 01:24:50.013799 kubelet[2704]: I0123 01:24:50.013746 2704 scope.go:117] "RemoveContainer" containerID="fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160" Jan 23 01:24:50.014506 containerd[1549]: time="2026-01-23T01:24:50.014443779Z" level=error msg="ContainerStatus for \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\": not found" Jan 23 01:24:50.014692 kubelet[2704]: E0123 01:24:50.014609 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\": not found" containerID="fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160" Jan 23 01:24:50.014785 kubelet[2704]: I0123 01:24:50.014662 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160"} err="failed to get container status \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb28247071af2c506cf4e5ef894306ac9841e0748a75af168f94467073355160\": not found" Jan 23 01:24:50.014785 kubelet[2704]: I0123 01:24:50.014770 2704 scope.go:117] "RemoveContainer" containerID="d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871" Jan 23 01:24:50.016217 systemd[1]: Removed slice kubepods-besteffort-pod226cfc49_acf8_4abc_bf26_69ad52ccf7e9.slice - libcontainer container kubepods-besteffort-pod226cfc49_acf8_4abc_bf26_69ad52ccf7e9.slice. Jan 23 01:24:50.035937 containerd[1549]: time="2026-01-23T01:24:50.035393133Z" level=info msg="RemoveContainer for \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\"" Jan 23 01:24:50.039752 systemd[1]: Removed slice kubepods-burstable-pod960e8f72_2a96_4356_96a4_71f44baf117f.slice - libcontainer container kubepods-burstable-pod960e8f72_2a96_4356_96a4_71f44baf117f.slice. Jan 23 01:24:50.039997 systemd[1]: kubepods-burstable-pod960e8f72_2a96_4356_96a4_71f44baf117f.slice: Consumed 7.264s CPU time, 124.2M memory peak, 112K read from disk, 13.3M written to disk. 
Jan 23 01:24:50.045540 containerd[1549]: time="2026-01-23T01:24:50.045481195Z" level=info msg="RemoveContainer for \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\" returns successfully" Jan 23 01:24:50.045825 kubelet[2704]: I0123 01:24:50.045793 2704 scope.go:117] "RemoveContainer" containerID="47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759" Jan 23 01:24:50.047526 containerd[1549]: time="2026-01-23T01:24:50.047485398Z" level=info msg="RemoveContainer for \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\"" Jan 23 01:24:50.052912 containerd[1549]: time="2026-01-23T01:24:50.052803693Z" level=info msg="RemoveContainer for \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\" returns successfully" Jan 23 01:24:50.053600 kubelet[2704]: I0123 01:24:50.053546 2704 scope.go:117] "RemoveContainer" containerID="69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b" Jan 23 01:24:50.055761 kubelet[2704]: I0123 01:24:50.055740 2704 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/960e8f72-2a96-4356-96a4-71f44baf117f-clustermesh-secrets\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055761 kubelet[2704]: I0123 01:24:50.055759 2704 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cni-path\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055761 kubelet[2704]: I0123 01:24:50.055768 2704 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-etc-cni-netd\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055892 kubelet[2704]: I0123 01:24:50.055777 2704 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-host-proc-sys-kernel\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055892 kubelet[2704]: I0123 01:24:50.055787 2704 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-run\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055892 kubelet[2704]: I0123 01:24:50.055795 2704 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-lib-modules\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055892 kubelet[2704]: I0123 01:24:50.055803 2704 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-xtables-lock\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055892 kubelet[2704]: I0123 01:24:50.055811 2704 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-72mwb\" (UniqueName: \"kubernetes.io/projected/960e8f72-2a96-4356-96a4-71f44baf117f-kube-api-access-72mwb\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055892 kubelet[2704]: I0123 01:24:50.055820 2704 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-hostproc\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055892 kubelet[2704]: I0123 01:24:50.055830 2704 reconciler_common.go:299] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/960e8f72-2a96-4356-96a4-71f44baf117f-hubble-tls\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.055892 kubelet[2704]: I0123 01:24:50.055838 2704 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-host-proc-sys-net\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.056054 kubelet[2704]: I0123 01:24:50.055848 2704 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-cgroup\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.056054 kubelet[2704]: I0123 01:24:50.055857 2704 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/960e8f72-2a96-4356-96a4-71f44baf117f-cilium-config-path\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.056054 kubelet[2704]: I0123 01:24:50.055865 2704 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/960e8f72-2a96-4356-96a4-71f44baf117f-bpf-maps\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.056054 kubelet[2704]: I0123 01:24:50.055873 2704 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/226cfc49-acf8-4abc-bf26-69ad52ccf7e9-cilium-config-path\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.056054 kubelet[2704]: I0123 01:24:50.055881 2704 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kkwdt\" (UniqueName: \"kubernetes.io/projected/226cfc49-acf8-4abc-bf26-69ad52ccf7e9-kube-api-access-kkwdt\") on node \"172-238-187-240\" DevicePath \"\"" Jan 23 01:24:50.057272 containerd[1549]: time="2026-01-23T01:24:50.057244173Z" level=info msg="RemoveContainer for \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\"" Jan 23 01:24:50.061335 containerd[1549]: time="2026-01-23T01:24:50.061231869Z" level=info msg="RemoveContainer for \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\" returns successfully" Jan 23 01:24:50.061416 kubelet[2704]: I0123 01:24:50.061392 2704 scope.go:117] "RemoveContainer" containerID="12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d" Jan 23 01:24:50.065237 containerd[1549]: time="2026-01-23T01:24:50.065209976Z" level=info msg="RemoveContainer for \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\"" Jan 23 01:24:50.070021 containerd[1549]: time="2026-01-23T01:24:50.069993372Z" level=info msg="RemoveContainer for \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\" returns successfully" Jan 23 01:24:50.070513 kubelet[2704]: I0123 01:24:50.070496 2704 scope.go:117] "RemoveContainer" containerID="95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9" Jan 23 01:24:50.072778 containerd[1549]: time="2026-01-23T01:24:50.072444138Z" level=info msg="RemoveContainer for \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\"" Jan 23 01:24:50.075787 containerd[1549]: time="2026-01-23T01:24:50.075764980Z" level=info msg="RemoveContainer for \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\" returns successfully" Jan 23 01:24:50.076260 kubelet[2704]: I0123 01:24:50.076183 2704 scope.go:117] "RemoveContainer" containerID="d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871" Jan 23 
01:24:50.076530 containerd[1549]: time="2026-01-23T01:24:50.076493183Z" level=error msg="ContainerStatus for \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\": not found" Jan 23 01:24:50.076728 kubelet[2704]: E0123 01:24:50.076624 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\": not found" containerID="d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871" Jan 23 01:24:50.076812 kubelet[2704]: I0123 01:24:50.076735 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871"} err="failed to get container status \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8b0ac0d7cb0792cc2e08451e8420bae3db7352655652bb24a2505af11049871\": not found" Jan 23 01:24:50.076812 kubelet[2704]: I0123 01:24:50.076757 2704 scope.go:117] "RemoveContainer" containerID="47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759" Jan 23 01:24:50.076985 containerd[1549]: time="2026-01-23T01:24:50.076890147Z" level=error msg="ContainerStatus for \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\": not found" Jan 23 01:24:50.077327 kubelet[2704]: E0123 01:24:50.077295 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\": not found" containerID="47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759" Jan 23 01:24:50.077376 kubelet[2704]: I0123 01:24:50.077329 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759"} err="failed to get container status \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\": rpc error: code = NotFound desc = an error occurred when try to find container \"47e3e8f60745e8abf4994640851cf946753659035991459635435c9643397759\": not found" Jan 23 01:24:50.077376 kubelet[2704]: I0123 01:24:50.077352 2704 scope.go:117] "RemoveContainer" containerID="69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b" Jan 23 01:24:50.077563 containerd[1549]: time="2026-01-23T01:24:50.077533522Z" level=error msg="ContainerStatus for \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\": not found" Jan 23 01:24:50.077701 kubelet[2704]: E0123 01:24:50.077670 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\": not found" containerID="69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b" Jan 23 01:24:50.077701 
kubelet[2704]: I0123 01:24:50.077690 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b"} err="failed to get container status \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\": rpc error: code = NotFound desc = an error occurred when try to find container \"69c166d978ec55c663a82cc8e5775c28de4847e6b256cafad2d5bd791518a04b\": not found" Jan 23 01:24:50.077772 kubelet[2704]: I0123 01:24:50.077705 2704 scope.go:117] "RemoveContainer" containerID="12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d" Jan 23 01:24:50.077908 containerd[1549]: time="2026-01-23T01:24:50.077888088Z" level=error msg="ContainerStatus for \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\": not found" Jan 23 01:24:50.078009 kubelet[2704]: E0123 01:24:50.077983 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\": not found" containerID="12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d" Jan 23 01:24:50.078044 kubelet[2704]: I0123 01:24:50.078012 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d"} err="failed to get container status \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\": rpc error: code = NotFound desc = an error occurred when try to find container \"12bca857ad43a60d95a55225a5aa3d334a2e11c4954a06c446fc9d4d5509038d\": not found" Jan 23 01:24:50.078044 kubelet[2704]: I0123 01:24:50.078030 2704 scope.go:117] "RemoveContainer" containerID="95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9" Jan 23 01:24:50.078273 containerd[1549]: time="2026-01-23T01:24:50.078172108Z" level=error msg="ContainerStatus for \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\": not found" Jan 23 01:24:50.078371 kubelet[2704]: E0123 01:24:50.078355 2704 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\": not found" containerID="95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9" Jan 23 01:24:50.078398 kubelet[2704]: I0123 01:24:50.078374 2704 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9"} err="failed to get container status \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"95087cee314af376473a9fb2bb9a179a81245ef6897f011f8c5afc23b2fb09d9\": not found" Jan 23 01:24:50.590626 kubelet[2704]: I0123 01:24:50.590557 2704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226cfc49-acf8-4abc-bf26-69ad52ccf7e9" path="/var/lib/kubelet/pods/226cfc49-acf8-4abc-bf26-69ad52ccf7e9/volumes" Jan 23 01:24:50.591237 
kubelet[2704]: I0123 01:24:50.591206 2704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960e8f72-2a96-4356-96a4-71f44baf117f" path="/var/lib/kubelet/pods/960e8f72-2a96-4356-96a4-71f44baf117f/volumes" Jan 23 01:24:50.672810 systemd[1]: var-lib-kubelet-pods-226cfc49\x2dacf8\x2d4abc\x2dbf26\x2d69ad52ccf7e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkkwdt.mount: Deactivated successfully. Jan 23 01:24:50.673152 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0-shm.mount: Deactivated successfully. Jan 23 01:24:50.673235 systemd[1]: var-lib-kubelet-pods-960e8f72\x2d2a96\x2d4356\x2d96a4\x2d71f44baf117f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d72mwb.mount: Deactivated successfully. Jan 23 01:24:50.673315 systemd[1]: var-lib-kubelet-pods-960e8f72\x2d2a96\x2d4356\x2d96a4\x2d71f44baf117f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 01:24:50.673402 systemd[1]: var-lib-kubelet-pods-960e8f72\x2d2a96\x2d4356\x2d96a4\x2d71f44baf117f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 01:24:51.545341 sshd[4294]: Connection closed by 68.220.241.50 port 56690 Jan 23 01:24:51.546733 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:51.552547 systemd[1]: sshd@23-172.238.187.240:22-68.220.241.50:56690.service: Deactivated successfully. Jan 23 01:24:51.555073 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 01:24:51.556703 systemd-logind[1524]: Session 24 logged out. Waiting for processes to exit. Jan 23 01:24:51.558471 systemd-logind[1524]: Removed session 24. Jan 23 01:24:51.576929 systemd[1]: Started sshd@24-172.238.187.240:22-68.220.241.50:56696.service - OpenSSH per-connection server daemon (68.220.241.50:56696). Jan 23 01:24:51.703802 kubelet[2704]: E0123 01:24:51.703724 2704 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 01:24:51.749569 sshd[4437]: Accepted publickey for core from 68.220.241.50 port 56696 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:51.750130 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:51.755811 systemd-logind[1524]: New session 25 of user core. Jan 23 01:24:51.764760 systemd[1]: Started session-25.scope - Session 25 of User core. 
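Note on the `var-lib-kubelet-pods-...\x2d...\x7eprojected...mount` unit names above: systemd derives mount unit names by escaping the mount point path, dropping the leading '/', turning '/' separators into '-', and hex-escaping other ambiguous bytes (so '-' becomes `\x2d` and '~' becomes `\x7e`). A small sketch of that escaping under a simplified rule set, not a drop-in replacement for systemd's full algorithm:

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: strip the leading slash,
// hex-escape every byte outside [A-Za-z0-9_.] (so '-' becomes \x2d), and turn
// the remaining '/' separators into '-'. Simplified for illustration.
func escapePath(p string) string {
	p = strings.TrimPrefix(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Produces a name of the same shape as the kubelet volume mount units in the log.
	fmt.Println(escapePath("/var/lib/kubelet/pods/226cfc49-acf8-4abc-bf26-69ad52ccf7e9/volumes") + ".mount")
}
```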
Jan 23 01:24:52.243726 kubelet[2704]: I0123 01:24:52.243690 2704 memory_manager.go:355] "RemoveStaleState removing state" podUID="960e8f72-2a96-4356-96a4-71f44baf117f" containerName="cilium-agent" Jan 23 01:24:52.243726 kubelet[2704]: I0123 01:24:52.243716 2704 memory_manager.go:355] "RemoveStaleState removing state" podUID="226cfc49-acf8-4abc-bf26-69ad52ccf7e9" containerName="cilium-operator" Jan 23 01:24:52.249785 kubelet[2704]: W0123 01:24:52.249760 2704 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172-238-187-240" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-238-187-240' and this object Jan 23 01:24:52.249853 kubelet[2704]: E0123 01:24:52.249793 2704 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-238-187-240\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-187-240' and this object" logger="UnhandledError" Jan 23 01:24:52.253100 sshd[4440]: Connection closed by 68.220.241.50 port 56696 Jan 23 01:24:52.254813 kubelet[2704]: W0123 01:24:52.254789 2704 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172-238-187-240" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-238-187-240' and this object Jan 23 01:24:52.254888 kubelet[2704]: E0123 01:24:52.254815 2704 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172-238-187-240\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-187-240' and this object" logger="UnhandledError" Jan 23 01:24:52.256913 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:52.259854 systemd[1]: Created slice kubepods-burstable-pod749de53a_b89c_41df_9560_bcf1b5a41295.slice - libcontainer container kubepods-burstable-pod749de53a_b89c_41df_9560_bcf1b5a41295.slice. 
Jan 23 01:24:52.262596 kubelet[2704]: I0123 01:24:52.262564 2704 status_manager.go:890] "Failed to get status for pod" podUID="749de53a-b89c-41df-9560-bcf1b5a41295" pod="kube-system/cilium-9kbg7" err="pods \"cilium-9kbg7\" is forbidden: User \"system:node:172-238-187-240\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-187-240' and this object" Jan 23 01:24:52.262700 kubelet[2704]: W0123 01:24:52.262620 2704 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172-238-187-240" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-238-187-240' and this object Jan 23 01:24:52.262700 kubelet[2704]: E0123 01:24:52.262656 2704 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:172-238-187-240\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-187-240' and this object" logger="UnhandledError" Jan 23 01:24:52.262700 kubelet[2704]: W0123 01:24:52.262692 2704 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172-238-187-240" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172-238-187-240' and this object Jan 23 01:24:52.262788 kubelet[2704]: E0123 01:24:52.262702 2704 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:172-238-187-240\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-187-240' and this object" logger="UnhandledError" Jan 23 01:24:52.268924 systemd[1]: sshd@24-172.238.187.240:22-68.220.241.50:56696.service: Deactivated successfully. Jan 23 01:24:52.272115 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 01:24:52.273696 systemd-logind[1524]: Session 25 logged out. Waiting for processes to exit. Jan 23 01:24:52.291028 systemd[1]: Started sshd@25-172.238.187.240:22-68.220.241.50:56708.service - OpenSSH per-connection server daemon (68.220.241.50:56708). Jan 23 01:24:52.292724 systemd-logind[1524]: Removed session 25. 
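The reflector warnings above come from the node authorizer refusing list/watch access until its graph records a relationship between this node and the new pod's ConfigMaps and Secrets; they are transient and clear once the pod is bound. A small illustrative helper for pulling the subject, verb, resource and namespace out of such messages (the regex is an assumption about the message shape, not an API):

    import re

    # Illustrative parser for the node-authorizer "forbidden" messages above.
    FORBIDDEN = re.compile(
        r'User "(?P<user>[^"]+)" cannot (?P<verb>\w+) resource "(?P<resource>[^"]+)"'
        r' in API group "(?P<group>[^"]*)" in the namespace "(?P<namespace>[^"]+)"')

    msg = ('secrets "cilium-clustermesh" is forbidden: '
           'User "system:node:172-238-187-240" cannot list resource "secrets" '
           'in API group "" in the namespace "kube-system"')
    m = FORBIDDEN.search(msg)
    if m:
        print(m.group("user"), m.group("verb"), m.group("resource"),
              m.group("namespace"))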
Jan 23 01:24:52.369830 kubelet[2704]: I0123 01:24:52.369789 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/749de53a-b89c-41df-9560-bcf1b5a41295-cilium-config-path\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.369830 kubelet[2704]: I0123 01:24:52.369825 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/749de53a-b89c-41df-9560-bcf1b5a41295-hubble-tls\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.369981 kubelet[2704]: I0123 01:24:52.369842 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-xtables-lock\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.369981 kubelet[2704]: I0123 01:24:52.369857 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmdc2\" (UniqueName: \"kubernetes.io/projected/749de53a-b89c-41df-9560-bcf1b5a41295-kube-api-access-fmdc2\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.369981 kubelet[2704]: I0123 01:24:52.369872 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-lib-modules\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.369981 kubelet[2704]: I0123 01:24:52.369885 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-bpf-maps\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.369981 kubelet[2704]: I0123 01:24:52.369899 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-host-proc-sys-kernel\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.369981 kubelet[2704]: I0123 01:24:52.369912 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/749de53a-b89c-41df-9560-bcf1b5a41295-clustermesh-secrets\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.370294 kubelet[2704]: I0123 01:24:52.369927 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-hostproc\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.370294 kubelet[2704]: I0123 01:24:52.369941 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-cni-path\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.370294 kubelet[2704]: I0123 01:24:52.369957 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-host-proc-sys-net\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.370294 kubelet[2704]: I0123 01:24:52.369971 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-cilium-cgroup\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.370294 kubelet[2704]: I0123 01:24:52.369984 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/749de53a-b89c-41df-9560-bcf1b5a41295-cilium-ipsec-secrets\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.370294 kubelet[2704]: I0123 01:24:52.369997 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-cilium-run\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.370418 kubelet[2704]: I0123 01:24:52.370011 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/749de53a-b89c-41df-9560-bcf1b5a41295-etc-cni-netd\") pod \"cilium-9kbg7\" (UID: \"749de53a-b89c-41df-9560-bcf1b5a41295\") " pod="kube-system/cilium-9kbg7" Jan 23 01:24:52.456587 sshd[4450]: Accepted publickey for core from 68.220.241.50 port 56708 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:52.458118 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:52.462788 systemd-logind[1524]: New session 26 of user core. Jan 23 01:24:52.466754 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 01:24:52.585532 sshd[4453]: Connection closed by 68.220.241.50 port 56708 Jan 23 01:24:52.586808 sshd-session[4450]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:52.592104 systemd[1]: sshd@25-172.238.187.240:22-68.220.241.50:56708.service: Deactivated successfully. Jan 23 01:24:52.594202 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 01:24:52.595272 systemd-logind[1524]: Session 26 logged out. Waiting for processes to exit. Jan 23 01:24:52.596630 systemd-logind[1524]: Removed session 26. Jan 23 01:24:52.619586 systemd[1]: Started sshd@26-172.238.187.240:22-68.220.241.50:40640.service - OpenSSH per-connection server daemon (68.220.241.50:40640). Jan 23 01:24:52.788025 sshd[4461]: Accepted publickey for core from 68.220.241.50 port 40640 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:24:52.789845 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:52.795782 systemd-logind[1524]: New session 27 of user core. 
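Read together, the VerifyControllerAttachedVolume entries above describe the full volume set of the cilium-9kbg7 pod. The names below are transcribed from those entries; the grouping by plugin type is only for readability:

    # Volume names transcribed from the reconciler entries above for
    # kube-system/cilium-9kbg7, grouped by the plugin type in each UniqueName.
    volumes = {
        "cilium-config-path": "configmap",
        "hubble-tls": "projected",
        "kube-api-access-fmdc2": "projected",
        "clustermesh-secrets": "secret",
        "cilium-ipsec-secrets": "secret",
        "xtables-lock": "host-path",
        "lib-modules": "host-path",
        "bpf-maps": "host-path",
        "host-proc-sys-kernel": "host-path",
        "host-proc-sys-net": "host-path",
        "hostproc": "host-path",
        "cni-path": "host-path",
        "cilium-cgroup": "host-path",
        "cilium-run": "host-path",
        "etc-cni-netd": "host-path",
    }

    by_plugin = {}
    for name, plugin in volumes.items():
        by_plugin.setdefault(plugin, []).append(name)
    for plugin, names in sorted(by_plugin.items()):
        print(f"{plugin}: {', '.join(sorted(names))}")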
Jan 23 01:24:52.802769 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 01:24:53.469301 kubelet[2704]: E0123 01:24:53.469267 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:53.470677 containerd[1549]: time="2026-01-23T01:24:53.470626809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9kbg7,Uid:749de53a-b89c-41df-9560-bcf1b5a41295,Namespace:kube-system,Attempt:0,}" Jan 23 01:24:53.493214 containerd[1549]: time="2026-01-23T01:24:53.493169106Z" level=info msg="connecting to shim 7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5" address="unix:///run/containerd/s/9f409a4912d5c44ca28e250bb213cb77a15bf88943287dd7b58907719ab18377" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:24:53.522766 systemd[1]: Started cri-containerd-7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5.scope - libcontainer container 7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5. Jan 23 01:24:53.549264 containerd[1549]: time="2026-01-23T01:24:53.549211330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9kbg7,Uid:749de53a-b89c-41df-9560-bcf1b5a41295,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\"" Jan 23 01:24:53.549907 kubelet[2704]: E0123 01:24:53.549883 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:53.553070 containerd[1549]: time="2026-01-23T01:24:53.552773596Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:24:53.561304 containerd[1549]: time="2026-01-23T01:24:53.561250450Z" level=info msg="Container d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:24:53.567368 containerd[1549]: time="2026-01-23T01:24:53.567346262Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee\"" Jan 23 01:24:53.568318 containerd[1549]: time="2026-01-23T01:24:53.567714917Z" level=info msg="StartContainer for \"d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee\"" Jan 23 01:24:53.568654 containerd[1549]: time="2026-01-23T01:24:53.568623033Z" level=info msg="connecting to shim d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee" address="unix:///run/containerd/s/9f409a4912d5c44ca28e250bb213cb77a15bf88943287dd7b58907719ab18377" protocol=ttrpc version=3 Jan 23 01:24:53.594760 systemd[1]: Started cri-containerd-d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee.scope - libcontainer container d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee. Jan 23 01:24:53.629488 containerd[1549]: time="2026-01-23T01:24:53.629419768Z" level=info msg="StartContainer for \"d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee\" returns successfully" Jan 23 01:24:53.640375 systemd[1]: cri-containerd-d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee.scope: Deactivated successfully. 
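The recurring "Nameserver limits exceeded" events reflect the kubelet's cap of three nameservers in a pod's resolver configuration; the applied line above lists exactly the first three. A rough sketch of that truncation (the parsing is a simplification, not the kubelet's code, and the fourth address is a made-up example):

    # Rough sketch of the three-nameserver cap behind the events above.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf: str):
        servers = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    applied, omitted = applied_nameservers(
        "nameserver 172.232.0.17\nnameserver 172.232.0.16\n"
        "nameserver 172.232.0.21\nnameserver 192.0.2.1\n")
    print("applied:", applied, "omitted:", omitted)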
Jan 23 01:24:53.645901 containerd[1549]: time="2026-01-23T01:24:53.645870493Z" level=info msg="received container exit event container_id:\"d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee\" id:\"d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee\" pid:4532 exited_at:{seconds:1769131493 nanos:645354272}" Jan 23 01:24:54.021661 kubelet[2704]: E0123 01:24:54.021615 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:54.025318 containerd[1549]: time="2026-01-23T01:24:54.025262748Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:24:54.031369 containerd[1549]: time="2026-01-23T01:24:54.031344172Z" level=info msg="Container 2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:24:54.036572 containerd[1549]: time="2026-01-23T01:24:54.036544429Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b\"" Jan 23 01:24:54.037305 containerd[1549]: time="2026-01-23T01:24:54.037284793Z" level=info msg="StartContainer for \"2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b\"" Jan 23 01:24:54.038567 containerd[1549]: time="2026-01-23T01:24:54.038498317Z" level=info msg="connecting to shim 2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b" address="unix:///run/containerd/s/9f409a4912d5c44ca28e250bb213cb77a15bf88943287dd7b58907719ab18377" protocol=ttrpc version=3 Jan 23 01:24:54.065896 systemd[1]: Started cri-containerd-2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b.scope - libcontainer container 2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b. Jan 23 01:24:54.116405 containerd[1549]: time="2026-01-23T01:24:54.116330900Z" level=info msg="StartContainer for \"2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b\" returns successfully" Jan 23 01:24:54.130293 systemd[1]: cri-containerd-2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b.scope: Deactivated successfully. Jan 23 01:24:54.132477 containerd[1549]: time="2026-01-23T01:24:54.132441013Z" level=info msg="received container exit event container_id:\"2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b\" id:\"2915021a20b80a38729d91520da554f27fd0c2832ac9ed2c8ae69e54240ad94b\" pid:4577 exited_at:{seconds:1769131494 nanos:131407301}" Jan 23 01:24:54.483277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d332852b850f988beeab72f5765b744eeeeae18fefffe44257952e3a2e15e7ee-rootfs.mount: Deactivated successfully. 
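The container exit events above carry protobuf-style timestamps (seconds plus nanoseconds). Converting the first one back to wall-clock time gives the same instant the journal stamps on the message:

    from datetime import datetime, timedelta, timezone

    # Converts the exited_at {seconds, nanos} pair from the first exit event
    # above back into a UTC wall-clock time (microsecond precision).
    def exit_time(seconds: int, nanos: int) -> datetime:
        return (datetime.fromtimestamp(seconds, tz=timezone.utc)
                + timedelta(microseconds=nanos // 1000))

    print(exit_time(1769131493, 645354272).isoformat())
    # -> 2026-01-23T01:24:53.645354+00:00, matching the 01:24:53 journal stamp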
Jan 23 01:24:55.026443 kubelet[2704]: E0123 01:24:55.026385 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:55.029911 containerd[1549]: time="2026-01-23T01:24:55.029873912Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:24:55.042385 containerd[1549]: time="2026-01-23T01:24:55.042274586Z" level=info msg="Container 534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:24:55.045871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866201824.mount: Deactivated successfully. Jan 23 01:24:55.054846 containerd[1549]: time="2026-01-23T01:24:55.054798896Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7\"" Jan 23 01:24:55.055658 containerd[1549]: time="2026-01-23T01:24:55.055381724Z" level=info msg="StartContainer for \"534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7\"" Jan 23 01:24:55.056564 containerd[1549]: time="2026-01-23T01:24:55.056528452Z" level=info msg="connecting to shim 534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7" address="unix:///run/containerd/s/9f409a4912d5c44ca28e250bb213cb77a15bf88943287dd7b58907719ab18377" protocol=ttrpc version=3 Jan 23 01:24:55.085754 systemd[1]: Started cri-containerd-534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7.scope - libcontainer container 534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7. Jan 23 01:24:55.207281 containerd[1549]: time="2026-01-23T01:24:55.207225652Z" level=info msg="StartContainer for \"534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7\" returns successfully" Jan 23 01:24:55.210679 systemd[1]: cri-containerd-534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7.scope: Deactivated successfully. Jan 23 01:24:55.215550 containerd[1549]: time="2026-01-23T01:24:55.215352652Z" level=info msg="received container exit event container_id:\"534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7\" id:\"534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7\" pid:4620 exited_at:{seconds:1769131495 nanos:215186549}" Jan 23 01:24:55.252347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7-rootfs.mount: Deactivated successfully. 
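The rootfs mount units cleaned up above embed the container id, so they can be matched back to the containerd create and exit messages. A small helper (the pattern is an assumption about the unit layout as it appears in this log):

    import re

    # Pulls the container id out of a run-containerd rootfs mount unit name as
    # it appears in the cleanup entries above.
    ROOTFS_UNIT = re.compile(
        r"run-containerd-io\.containerd\.runtime\.v2\.task-k8s\.io-"
        r"(?P<cid>[0-9a-f]+)-rootfs\.mount$")

    unit = ("run-containerd-io.containerd.runtime.v2.task-k8s.io-"
            "534d97572bdc82a21429d5eff7a750e29887c38a6a5ce20d879f9afb5783dfd7"
            "-rootfs.mount")
    m = ROOTFS_UNIT.match(unit)
    print(m.group("cid") if m else "no match")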
Jan 23 01:24:56.031437 kubelet[2704]: E0123 01:24:56.031382 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:56.036458 containerd[1549]: time="2026-01-23T01:24:56.036397147Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:24:56.051126 containerd[1549]: time="2026-01-23T01:24:56.051026635Z" level=info msg="Container 232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:24:56.055897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506752040.mount: Deactivated successfully. Jan 23 01:24:56.061138 containerd[1549]: time="2026-01-23T01:24:56.061093628Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25\"" Jan 23 01:24:56.061570 containerd[1549]: time="2026-01-23T01:24:56.061543581Z" level=info msg="StartContainer for \"232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25\"" Jan 23 01:24:56.062603 containerd[1549]: time="2026-01-23T01:24:56.062562713Z" level=info msg="connecting to shim 232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25" address="unix:///run/containerd/s/9f409a4912d5c44ca28e250bb213cb77a15bf88943287dd7b58907719ab18377" protocol=ttrpc version=3 Jan 23 01:24:56.092783 systemd[1]: Started cri-containerd-232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25.scope - libcontainer container 232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25. Jan 23 01:24:56.134333 systemd[1]: cri-containerd-232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25.scope: Deactivated successfully. Jan 23 01:24:56.135633 containerd[1549]: time="2026-01-23T01:24:56.135592851Z" level=info msg="received container exit event container_id:\"232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25\" id:\"232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25\" pid:4658 exited_at:{seconds:1769131496 nanos:134514050}" Jan 23 01:24:56.136718 containerd[1549]: time="2026-01-23T01:24:56.136699831Z" level=info msg="StartContainer for \"232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25\" returns successfully" Jan 23 01:24:56.159516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-232be24fdefa0b5a5c19a56b697f51ce9160d75fb9cea1436cfcd16292ef3d25-rootfs.mount: Deactivated successfully. 
Jan 23 01:24:56.704658 kubelet[2704]: E0123 01:24:56.704568 2704 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 01:24:57.036473 kubelet[2704]: E0123 01:24:57.036307 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:57.040395 containerd[1549]: time="2026-01-23T01:24:57.038868510Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 01:24:57.051947 containerd[1549]: time="2026-01-23T01:24:57.051854440Z" level=info msg="Container be0222e0a1cab126fe80377ce0333e1a203260e87a79071f2a41205e65794114: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:24:57.057976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4244497677.mount: Deactivated successfully. Jan 23 01:24:57.063222 containerd[1549]: time="2026-01-23T01:24:57.063186451Z" level=info msg="CreateContainer within sandbox \"7a37e483d583326253f31bf6babced3b1bd99dd4a36bc19e1fd2aa2bb4f81aa5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"be0222e0a1cab126fe80377ce0333e1a203260e87a79071f2a41205e65794114\"" Jan 23 01:24:57.064652 containerd[1549]: time="2026-01-23T01:24:57.064615809Z" level=info msg="StartContainer for \"be0222e0a1cab126fe80377ce0333e1a203260e87a79071f2a41205e65794114\"" Jan 23 01:24:57.067529 containerd[1549]: time="2026-01-23T01:24:57.066590127Z" level=info msg="connecting to shim be0222e0a1cab126fe80377ce0333e1a203260e87a79071f2a41205e65794114" address="unix:///run/containerd/s/9f409a4912d5c44ca28e250bb213cb77a15bf88943287dd7b58907719ab18377" protocol=ttrpc version=3 Jan 23 01:24:57.099777 systemd[1]: Started cri-containerd-be0222e0a1cab126fe80377ce0333e1a203260e87a79071f2a41205e65794114.scope - libcontainer container be0222e0a1cab126fe80377ce0333e1a203260e87a79071f2a41205e65794114. 
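Every "connecting to shim" message in this pod's lifecycle quotes the same socket under /run/containerd/s/, since all of the pod's containers are served through the sandbox's single shim. Splitting the address into scheme and path, for reference:

    from urllib.parse import urlparse

    # The ttrpc shim address quoted in each "connecting to shim" message above;
    # every container of cilium-9kbg7 reuses the sandbox's socket.
    address = ("unix:///run/containerd/s/"
               "9f409a4912d5c44ca28e250bb213cb77a15bf88943287dd7b58907719ab18377")
    parsed = urlparse(address)
    print(parsed.scheme, parsed.path)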
Jan 23 01:24:57.154215 containerd[1549]: time="2026-01-23T01:24:57.154163001Z" level=info msg="StartContainer for \"be0222e0a1cab126fe80377ce0333e1a203260e87a79071f2a41205e65794114\" returns successfully" Jan 23 01:24:57.636534 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jan 23 01:24:58.041715 kubelet[2704]: E0123 01:24:58.041584 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:24:58.057745 kubelet[2704]: I0123 01:24:58.057673 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9kbg7" podStartSLOduration=6.057657896 podStartE2EDuration="6.057657896s" podCreationTimestamp="2026-01-23 01:24:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:24:58.057232491 +0000 UTC m=+171.562756956" watchObservedRunningTime="2026-01-23 01:24:58.057657896 +0000 UTC m=+171.563182361" Jan 23 01:24:59.472682 kubelet[2704]: E0123 01:24:59.471573 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:25:00.667838 kubelet[2704]: I0123 01:25:00.667737 2704 setters.go:602] "Node became not ready" node="172-238-187-240" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T01:25:00Z","lastTransitionTime":"2026-01-23T01:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 23 01:25:00.811071 systemd-networkd[1427]: lxc_health: Link UP Jan 23 01:25:00.812902 systemd-networkd[1427]: lxc_health: Gained carrier Jan 23 01:25:01.471488 kubelet[2704]: E0123 01:25:01.470831 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:25:02.052537 kubelet[2704]: E0123 01:25:02.052483 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:25:02.324130 systemd-networkd[1427]: lxc_health: Gained IPv6LL Jan 23 01:25:03.055381 kubelet[2704]: E0123 01:25:03.054400 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:25:06.604922 containerd[1549]: time="2026-01-23T01:25:06.604721369Z" level=info msg="StopPodSandbox for \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\"" Jan 23 01:25:06.604922 containerd[1549]: time="2026-01-23T01:25:06.604845994Z" level=info msg="TearDown network for sandbox \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" successfully" Jan 23 01:25:06.604922 containerd[1549]: time="2026-01-23T01:25:06.604859354Z" level=info msg="StopPodSandbox for \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" returns successfully" Jan 23 01:25:06.607652 containerd[1549]: time="2026-01-23T01:25:06.605929097Z" level=info msg="RemovePodSandbox for 
\"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\"" Jan 23 01:25:06.607652 containerd[1549]: time="2026-01-23T01:25:06.605950357Z" level=info msg="Forcibly stopping sandbox \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\"" Jan 23 01:25:06.607652 containerd[1549]: time="2026-01-23T01:25:06.606008655Z" level=info msg="TearDown network for sandbox \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" successfully" Jan 23 01:25:06.610524 containerd[1549]: time="2026-01-23T01:25:06.610295790Z" level=info msg="Ensure that sandbox b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c in task-service has been cleanup successfully" Jan 23 01:25:06.613589 containerd[1549]: time="2026-01-23T01:25:06.613563010Z" level=info msg="RemovePodSandbox \"b6e569b81989a4cb8e528886d847de0dd9789ae7e5324cffc16ac2782e32c87c\" returns successfully" Jan 23 01:25:06.613950 containerd[1549]: time="2026-01-23T01:25:06.613924427Z" level=info msg="StopPodSandbox for \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\"" Jan 23 01:25:06.614023 containerd[1549]: time="2026-01-23T01:25:06.614003914Z" level=info msg="TearDown network for sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" successfully" Jan 23 01:25:06.614023 containerd[1549]: time="2026-01-23T01:25:06.614017284Z" level=info msg="StopPodSandbox for \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" returns successfully" Jan 23 01:25:06.614258 containerd[1549]: time="2026-01-23T01:25:06.614236436Z" level=info msg="RemovePodSandbox for \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\"" Jan 23 01:25:06.614289 containerd[1549]: time="2026-01-23T01:25:06.614257066Z" level=info msg="Forcibly stopping sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\"" Jan 23 01:25:06.614316 containerd[1549]: time="2026-01-23T01:25:06.614303644Z" level=info msg="TearDown network for sandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" successfully" Jan 23 01:25:06.615790 containerd[1549]: time="2026-01-23T01:25:06.615764315Z" level=info msg="Ensure that sandbox db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0 in task-service has been cleanup successfully" Jan 23 01:25:06.617514 containerd[1549]: time="2026-01-23T01:25:06.617488787Z" level=info msg="RemovePodSandbox \"db4b0a1406063c267d1c9bb9f7d5fd7c0ebb168311f7db2be1fd3c9d0f42bcb0\" returns successfully" Jan 23 01:25:07.773968 sshd[4464]: Connection closed by 68.220.241.50 port 40640 Jan 23 01:25:07.775026 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:07.781374 systemd[1]: sshd@26-172.238.187.240:22-68.220.241.50:40640.service: Deactivated successfully. Jan 23 01:25:07.785682 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 01:25:07.786678 systemd-logind[1524]: Session 27 logged out. Waiting for processes to exit. Jan 23 01:25:07.789019 systemd-logind[1524]: Removed session 27.