Jan 20 00:33:47.587074 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026
Jan 20 00:33:47.587198 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:33:47.587217 kernel: BIOS-provided physical RAM map:
Jan 20 00:33:47.587227 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 00:33:47.587236 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 00:33:47.587245 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 00:33:47.587257 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 00:33:47.587267 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 00:33:47.587276 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 20 00:33:47.587286 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 20 00:33:47.587299 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 20 00:33:47.587309 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 20 00:33:47.592754 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 20 00:33:47.592782 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 20 00:33:47.592823 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 20 00:33:47.592836 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 00:33:47.592855 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 20 00:33:47.592865 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 20 00:33:47.592875 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 00:33:47.592886 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 00:33:47.592896 kernel: NX (Execute Disable) protection: active
Jan 20 00:33:47.592906 kernel: APIC: Static calls initialized
Jan 20 00:33:47.592916 kernel: efi: EFI v2.7 by EDK II
Jan 20 00:33:47.592927 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jan 20 00:33:47.592937 kernel: SMBIOS 2.8 present.
Jan 20 00:33:47.592947 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 20 00:33:47.592958 kernel: Hypervisor detected: KVM
Jan 20 00:33:47.592973 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 00:33:47.592983 kernel: kvm-clock: using sched offset of 15030239467 cycles
Jan 20 00:33:47.592995 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 00:33:47.593005 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 00:33:47.593016 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 00:33:47.593027 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 00:33:47.593038 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 20 00:33:47.593048 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 00:33:47.593059 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 00:33:47.593074 kernel: Using GB pages for direct mapping
Jan 20 00:33:47.593085 kernel: Secure boot disabled
Jan 20 00:33:47.593095 kernel: ACPI: Early table checksum verification disabled
Jan 20 00:33:47.593106 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 00:33:47.593123 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 00:33:47.593134 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:33:47.593146 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:33:47.593161 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 00:33:47.593172 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:33:47.593211 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:33:47.593224 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:33:47.593235 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 00:33:47.593246 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 00:33:47.593257 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 00:33:47.593273 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 00:33:47.593284 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 00:33:47.593295 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 00:33:47.593306 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 00:33:47.593317 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 00:33:47.593364 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 00:33:47.593377 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 00:33:47.593388 kernel: No NUMA configuration found
Jan 20 00:33:47.593422 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 20 00:33:47.593439 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 20 00:33:47.593451 kernel: Zone ranges:
Jan 20 00:33:47.593462 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 00:33:47.593473 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 20 00:33:47.593484 kernel: Normal empty
Jan 20 00:33:47.593495 kernel: Movable zone start for each node
Jan 20 00:33:47.593506 kernel: Early memory node ranges
Jan 20 00:33:47.593518 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 00:33:47.593529 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 00:33:47.593545 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 00:33:47.593556 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 20 00:33:47.593567 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 20 00:33:47.593578 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 20 00:33:47.593612 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 20 00:33:47.593623 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 00:33:47.593634 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 00:33:47.593646 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 00:33:47.593657 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 00:33:47.593713 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 20 00:33:47.593733 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 20 00:33:47.593744 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 20 00:33:47.593755 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 00:33:47.593766 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 00:33:47.593777 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 00:33:47.593788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 00:33:47.593799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 00:33:47.593810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 00:33:47.593821 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 00:33:47.593836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 00:33:47.593847 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 00:33:47.593859 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 00:33:47.593870 kernel: TSC deadline timer available
Jan 20 00:33:47.593881 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 20 00:33:47.593892 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 00:33:47.593903 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 00:33:47.593914 kernel: kvm-guest: setup PV sched yield
Jan 20 00:33:47.593925 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 20 00:33:47.593940 kernel: Booting paravirtualized kernel on KVM
Jan 20 00:33:47.593951 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 00:33:47.593963 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 00:33:47.593974 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 20 00:33:47.593985 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 20 00:33:47.593997 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 00:33:47.594008 kernel: kvm-guest: PV spinlocks enabled
Jan 20 00:33:47.594019 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 00:33:47.594031 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:33:47.595375 kernel: random: crng init done
Jan 20 00:33:47.595394 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 00:33:47.595407 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 00:33:47.595418 kernel: Fallback order for Node 0: 0
Jan 20 00:33:47.595430 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 20 00:33:47.595441 kernel: Policy zone: DMA32
Jan 20 00:33:47.595451 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 00:33:47.595463 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 166124K reserved, 0K cma-reserved)
Jan 20 00:33:47.595481 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 00:33:47.595492 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 20 00:33:47.595503 kernel: ftrace: allocated 149 pages with 4 groups
Jan 20 00:33:47.595514 kernel: Dynamic Preempt: voluntary
Jan 20 00:33:47.595526 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 00:33:47.595552 kernel: rcu: RCU event tracing is enabled.
Jan 20 00:33:47.595568 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 00:33:47.595580 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 00:33:47.595591 kernel: Rude variant of Tasks RCU enabled.
Jan 20 00:33:47.595603 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 00:33:47.595614 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 00:33:47.595626 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 00:33:47.595642 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 00:33:47.595654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 00:33:47.595665 kernel: Console: colour dummy device 80x25
Jan 20 00:33:47.595733 kernel: printk: console [ttyS0] enabled
Jan 20 00:33:47.595770 kernel: ACPI: Core revision 20230628
Jan 20 00:33:47.595790 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 00:33:47.595802 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 00:33:47.595814 kernel: x2apic enabled
Jan 20 00:33:47.595826 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 00:33:47.595837 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 00:33:47.595849 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 00:33:47.595860 kernel: kvm-guest: setup PV IPIs
Jan 20 00:33:47.595872 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 00:33:47.595884 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 20 00:33:47.595899 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 00:33:47.595911 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 00:33:47.595923 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 00:33:47.595935 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 00:33:47.595946 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 00:33:47.595958 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 00:33:47.595970 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 00:33:47.595982 kernel: Speculative Store Bypass: Vulnerable
Jan 20 00:33:47.595993 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 00:33:47.596010 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 00:33:47.596022 kernel: active return thunk: srso_alias_return_thunk
Jan 20 00:33:47.596034 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 00:33:47.596046 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 00:33:47.596081 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 00:33:47.596095 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 00:33:47.596107 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 00:33:47.596119 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 00:33:47.596135 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 00:33:47.596147 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 00:33:47.596159 kernel: Freeing SMP alternatives memory: 32K
Jan 20 00:33:47.596171 kernel: pid_max: default: 32768 minimum: 301
Jan 20 00:33:47.596183 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 20 00:33:47.596194 kernel: landlock: Up and running.
Jan 20 00:33:47.596206 kernel: SELinux: Initializing.
Jan 20 00:33:47.596218 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 00:33:47.596230 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 00:33:47.596246 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 00:33:47.596257 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:33:47.596269 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:33:47.596281 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 00:33:47.596293 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 00:33:47.596304 kernel: signal: max sigframe size: 1776
Jan 20 00:33:47.596316 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 00:33:47.596364 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 00:33:47.596378 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 00:33:47.596395 kernel: smp: Bringing up secondary CPUs ...
Jan 20 00:33:47.596406 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 00:33:47.596418 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 00:33:47.596429 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 00:33:47.596441 kernel: smpboot: Max logical packages: 1
Jan 20 00:33:47.600722 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 00:33:47.600747 kernel: devtmpfs: initialized
Jan 20 00:33:47.600761 kernel: x86/mm: Memory block size: 128MB
Jan 20 00:33:47.600773 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 00:33:47.600794 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 00:33:47.600806 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 20 00:33:47.600818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 00:33:47.600831 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 00:33:47.600844 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 00:33:47.600857 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 00:33:47.600871 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 00:33:47.600885 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 00:33:47.600897 kernel: audit: initializing netlink subsys (disabled)
Jan 20 00:33:47.600917 kernel: audit: type=2000 audit(1768869220.345:1): state=initialized audit_enabled=0 res=1
Jan 20 00:33:47.600931 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 00:33:47.600943 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 00:33:47.600954 kernel: cpuidle: using governor menu
Jan 20 00:33:47.600966 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 00:33:47.600977 kernel: dca service started, version 1.12.1
Jan 20 00:33:47.600990 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 20 00:33:47.601002 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 00:33:47.601020 kernel: PCI: Using configuration type 1 for base access
Jan 20 00:33:47.601031 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 00:33:47.601043 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 00:33:47.601055 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 00:33:47.601067 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 00:33:47.601078 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 00:33:47.601090 kernel: ACPI: Added _OSI(Module Device)
Jan 20 00:33:47.601102 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 00:33:47.601113 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 00:33:47.601129 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 00:33:47.601141 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 20 00:33:47.602450 kernel: ACPI: Interpreter enabled
Jan 20 00:33:47.602465 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 00:33:47.602476 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 00:33:47.602487 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 00:33:47.602498 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 00:33:47.602508 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 00:33:47.602518 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 00:33:47.603553 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 00:33:47.603963 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 00:33:47.604158 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 00:33:47.604174 kernel: PCI host bridge to bus 0000:00
Jan 20 00:33:47.604541 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 00:33:47.604802 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 00:33:47.604981 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 00:33:47.605174 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 00:33:47.605418 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 00:33:47.605723 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 20 00:33:47.605930 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 00:33:47.606258 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 20 00:33:47.608814 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 20 00:33:47.609058 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 20 00:33:47.609250 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 20 00:33:47.609610 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 20 00:33:47.609947 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 20 00:33:47.610146 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 00:33:47.610548 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 20 00:33:47.610880 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 20 00:33:47.611092 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 20 00:33:47.611288 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 20 00:33:47.611600 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 20 00:33:47.612549 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 20 00:33:47.612825 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 20 00:33:47.613032 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 20 00:33:47.613292 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 20 00:33:47.614600 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 20 00:33:47.614872 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 20 00:33:47.615076 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 20 00:33:47.615273 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 20 00:33:47.615654 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 20 00:33:47.615928 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 00:33:47.616233 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 20 00:33:47.617193 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 20 00:33:47.617463 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 20 00:33:47.619305 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 20 00:33:47.619566 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 20 00:33:47.619586 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 00:33:47.619599 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 00:33:47.619611 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 00:33:47.619630 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 00:33:47.619642 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 00:33:47.619654 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 00:33:47.619665 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 00:33:47.619736 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 00:33:47.619748 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 00:33:47.619760 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 00:33:47.619772 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 00:33:47.619783 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 00:33:47.619800 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 00:33:47.619812 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 00:33:47.619823 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 00:33:47.619835 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 00:33:47.619846 kernel: iommu: Default domain type: Translated
Jan 20 00:33:47.619858 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 00:33:47.619870 kernel: efivars: Registered efivars operations
Jan 20 00:33:47.619883 kernel: PCI: Using ACPI for IRQ routing
Jan 20 00:33:47.619894 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 00:33:47.619910 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 00:33:47.619922 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 20 00:33:47.619933 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 20 00:33:47.619945 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 20 00:33:47.620150 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 00:33:47.624116 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 00:33:47.627651 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 00:33:47.627734 kernel: vgaarb: loaded
Jan 20 00:33:47.627757 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 00:33:47.627769 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 00:33:47.627781 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 00:33:47.627793 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 00:33:47.627805 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 00:33:47.627817 kernel: pnp: PnP ACPI init
Jan 20 00:33:47.628219 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 00:33:47.628240 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 00:33:47.628252 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 00:33:47.628271 kernel: NET: Registered PF_INET protocol family
Jan 20 00:33:47.628283 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 00:33:47.628295 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 00:33:47.628307 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 00:33:47.628319 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 00:33:47.628368 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 00:33:47.628381 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 00:33:47.628393 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:33:47.628410 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 00:33:47.628422 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 00:33:47.628433 kernel: NET: Registered PF_XDP protocol family
Jan 20 00:33:47.630490 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 20 00:33:47.630777 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 20 00:33:47.630975 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 00:33:47.631157 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 00:33:47.631422 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 00:33:47.631626 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 00:33:47.631869 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 00:33:47.632055 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 20 00:33:47.632071 kernel: PCI: CLS 0 bytes, default 64
Jan 20 00:33:47.632084 kernel: Initialise system trusted keyrings
Jan 20 00:33:47.632096 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 00:33:47.632107 kernel: Key type asymmetric registered
Jan 20 00:33:47.632119 kernel: Asymmetric key parser 'x509' registered
Jan 20 00:33:47.632131 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 20 00:33:47.632150 kernel: io scheduler mq-deadline registered
Jan 20 00:33:47.632161 kernel: io scheduler kyber registered
Jan 20 00:33:47.632173 kernel: io scheduler bfq registered
Jan 20 00:33:47.632185 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 00:33:47.632198 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 00:33:47.632210 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 00:33:47.632222 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 00:33:47.632233 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 00:33:47.632245 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 00:33:47.632261 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 00:33:47.632273 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 00:33:47.632285 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 00:33:47.635759 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 00:33:47.635970 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 00:33:47.635988 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 00:33:47.636174 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:33:45 UTC (1768869225)
Jan 20 00:33:47.636454 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 00:33:47.636481 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 00:33:47.636494 kernel: efifb: probing for efifb
Jan 20 00:33:47.636506 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 20 00:33:47.636517 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 20 00:33:47.636529 kernel: efifb: scrolling: redraw
Jan 20 00:33:47.636541 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 20 00:33:47.636553 kernel: Console: switching to colour frame buffer device 100x37
Jan 20 00:33:47.636564 kernel: fb0: EFI VGA frame buffer device
Jan 20 00:33:47.636576 kernel: pstore: Using crash dump compression: deflate
Jan 20 00:33:47.636593 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 00:33:47.636604 kernel: NET: Registered PF_INET6 protocol family
Jan 20 00:33:47.636616 kernel: Segment Routing with IPv6
Jan 20 00:33:47.636627 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 00:33:47.636639 kernel: NET: Registered PF_PACKET protocol family
Jan 20 00:33:47.636650 kernel: Key type dns_resolver registered
Jan 20 00:33:47.636663 kernel: IPI shorthand broadcast: enabled
Jan 20 00:33:47.636759 kernel: sched_clock: Marking stable (4992025725, 718039856)->(6639297021, -929231440)
Jan 20 00:33:47.636776 kernel: registered taskstats version 1
Jan 20 00:33:47.636792 kernel: Loading compiled-in X.509 certificates
Jan 20 00:33:47.636804 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1'
Jan 20 00:33:47.636816 kernel: Key type .fscrypt registered
Jan 20 00:33:47.636828 kernel: Key type fscrypt-provisioning registered
Jan 20 00:33:47.636841 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 00:33:47.636853 kernel: ima: Allocated hash algorithm: sha1
Jan 20 00:33:47.636865 kernel: ima: No architecture policies found
Jan 20 00:33:47.636877 kernel: clk: Disabling unused clocks
Jan 20 00:33:47.636889 kernel: Freeing unused kernel image (initmem) memory: 42880K
Jan 20 00:33:47.636905 kernel: Write protecting the kernel read-only data: 36864k
Jan 20 00:33:47.636918 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 20 00:33:47.636930 kernel: Run /init as init process
Jan 20 00:33:47.636942 kernel: with arguments:
Jan 20 00:33:47.636954 kernel: /init
Jan 20 00:33:47.636966 kernel: with environment:
Jan 20 00:33:47.636978 kernel: HOME=/
Jan 20 00:33:47.636990 kernel: TERM=linux
Jan 20 00:33:47.637034 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:33:47.637055 systemd[1]: Detected virtualization kvm.
Jan 20 00:33:47.637068 systemd[1]: Detected architecture x86-64.
Jan 20 00:33:47.637080 systemd[1]: Running in initrd.
Jan 20 00:33:47.637093 systemd[1]: No hostname configured, using default hostname.
Jan 20 00:33:47.637105 systemd[1]: Hostname set to <localhost>.
Jan 20 00:33:47.637118 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:33:47.637130 systemd[1]: Queued start job for default target initrd.target.
Jan 20 00:33:47.637147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:33:47.637160 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:33:47.637174 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 00:33:47.637187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:33:47.637200 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 00:33:47.637221 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 00:33:47.637235 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 00:33:47.637248 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 00:33:47.637261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:33:47.637274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:33:47.637287 systemd[1]: Reached target paths.target - Path Units.
Jan 20 00:33:47.637304 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:33:47.637316 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:33:47.637369 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 00:33:47.637384 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:33:47.637397 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:33:47.637410 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 00:33:47.637422 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:33:47.637435 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:33:47.637448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:33:47.637466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:33:47.637479 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 00:33:47.637492 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 00:33:47.637505 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:33:47.637518 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 00:33:47.637530 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 00:33:47.637543 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:33:47.637555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:33:47.637569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:33:47.637587 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 00:33:47.637600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:33:47.637613 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 00:33:47.637663 systemd-journald[194]: Collecting audit messages is disabled.
Jan 20 00:33:47.637789 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 00:33:47.637804 systemd-journald[194]: Journal started
Jan 20 00:33:47.637837 systemd-journald[194]: Runtime Journal (/run/log/journal/f7b64fef1f69465d81b912fac5f75e6f) is 6.0M, max 48.3M, 42.2M free.
Jan 20 00:33:47.570133 systemd-modules-load[195]: Inserted module 'overlay'
Jan 20 00:33:47.647934 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:33:47.665947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:33:47.704527 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:33:47.718950 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 00:33:47.718984 kernel: Bridge firewalling registered
Jan 20 00:33:47.735182 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 20 00:33:47.742963 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:33:47.758622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:33:47.770919 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 00:33:47.782028 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:33:47.818198 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 00:33:47.827561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:33:47.857499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:33:47.869842 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:33:47.902506 dracut-cmdline[223]: dracut-dracut-053
Jan 20 00:33:47.902506 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 00:33:47.942409 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:33:47.954885 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:33:47.986984 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:33:48.046018 systemd-resolved[268]: Positive Trust Anchors:
Jan 20 00:33:48.046140 systemd-resolved[268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 00:33:48.046190 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 00:33:48.050821 systemd-resolved[268]: Defaulting to hostname 'linux'.
Jan 20 00:33:48.054200 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 00:33:48.060291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:33:48.120926 kernel: SCSI subsystem initialized
Jan 20 00:33:48.136388 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 00:33:48.156417 kernel: iscsi: registered transport (tcp)
Jan 20 00:33:48.193383 kernel: iscsi: registered transport (qla4xxx)
Jan 20 00:33:48.193468 kernel: QLogic iSCSI HBA Driver
Jan 20 00:33:48.345380 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:33:48.371806 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 00:33:48.500913 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 00:33:48.501057 kernel: device-mapper: uevent: version 1.0.3
Jan 20 00:33:48.501082 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 20 00:33:48.611900 kernel: raid6: avx2x4 gen() 19155 MB/s
Jan 20 00:33:48.630858 kernel: raid6: avx2x2 gen() 17840 MB/s
Jan 20 00:33:48.656396 kernel: raid6: avx2x1 gen() 10647 MB/s
Jan 20 00:33:48.656504 kernel: raid6: using algorithm avx2x4 gen() 19155 MB/s
Jan 20 00:33:48.681288 kernel: raid6: .... xor() 4413 MB/s, rmw enabled
Jan 20 00:33:48.681437 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 00:33:48.722941 kernel: xor: automatically using best checksumming function avx
Jan 20 00:33:49.205456 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 00:33:49.260858 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:33:49.291023 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:33:49.342301 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Jan 20 00:33:49.361793 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:33:49.397961 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 00:33:49.456147 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 20 00:33:49.597112 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:33:49.639590 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:33:49.902135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:33:49.949042 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 00:33:49.989416 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:33:50.025877 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:33:50.038894 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:33:50.051033 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:33:50.073122 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 00:33:50.100090 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 00:33:50.127860 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 00:33:50.127924 kernel: libata version 3.00 loaded.
Jan 20 00:33:50.127944 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 20 00:33:50.136117 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:33:50.160891 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 00:33:50.161030 kernel: GPT:9289727 != 19775487
Jan 20 00:33:50.161054 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 00:33:50.165512 kernel: GPT:9289727 != 19775487
Jan 20 00:33:50.169473 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 00:33:50.169529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:33:50.170968 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:33:50.171191 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:33:50.246086 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 00:33:50.246124 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 00:33:50.247592 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 00:33:50.247667 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 20 00:33:50.248024 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 00:33:50.248299 kernel: AES CTR mode by8 optimization enabled
Jan 20 00:33:50.248331 kernel: scsi host0: ahci
Jan 20 00:33:50.184313 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:33:50.340907 kernel: scsi host1: ahci
Jan 20 00:33:50.341268 kernel: scsi host2: ahci
Jan 20 00:33:50.341616 kernel: scsi host3: ahci
Jan 20 00:33:50.342051 kernel: scsi host4: ahci
Jan 20 00:33:50.342394 kernel: scsi host5: ahci
Jan 20 00:33:50.342639 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 20 00:33:50.342659 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 20 00:33:50.342832 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 20 00:33:50.342854 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 20 00:33:50.342871 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 20 00:33:50.342886 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 20 00:33:50.342909 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (470)
Jan 20 00:33:50.205738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:33:50.353483 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477)
Jan 20 00:33:50.206079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:33:50.232544 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:33:50.328018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:33:50.368647 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 00:33:50.377825 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 00:33:50.385598 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 00:33:50.389462 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 00:33:50.403571 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:33:50.441664 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 00:33:50.458280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:33:50.500869 disk-uuid[561]: Primary Header is updated.
Jan 20 00:33:50.500869 disk-uuid[561]: Secondary Entries is updated.
Jan 20 00:33:50.500869 disk-uuid[561]: Secondary Header is updated.
Jan 20 00:33:50.535567 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:33:50.458553 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:33:50.465057 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:33:50.559105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:33:50.560835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:33:50.589180 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 00:33:50.618243 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 00:33:50.618276 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 00:33:50.619089 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 00:33:50.633504 kernel: ata3.00: applying bridge limits
Jan 20 00:33:50.638752 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 00:33:50.645551 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 00:33:50.645610 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 00:33:50.649761 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 00:33:50.655022 kernel: ata3.00: configured for UDMA/100
Jan 20 00:33:50.669034 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 00:33:50.693830 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:33:50.846259 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 00:33:50.849097 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 00:33:50.863843 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 00:33:51.579479 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 00:33:51.591893 disk-uuid[563]: The operation has completed successfully.
Jan 20 00:33:51.726849 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 00:33:51.734403 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 00:33:51.783485 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 00:33:51.814659 sh[602]: Success
Jan 20 00:33:51.861596 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 20 00:33:51.978266 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 00:33:52.043656 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 00:33:52.080135 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 00:33:52.153468 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c
Jan 20 00:33:52.153555 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:33:52.166447 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 20 00:33:52.166526 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 00:33:52.179045 kernel: BTRFS info (device dm-0): using free space tree
Jan 20 00:33:52.229873 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 00:33:52.232218 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 00:33:52.271047 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 00:33:52.297071 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 00:33:52.356604 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:33:52.356742 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:33:52.356767 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:33:52.388537 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:33:52.434555 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 20 00:33:52.450987 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:33:52.476626 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 00:33:52.510138 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 00:33:52.789231 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:33:52.827738 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:33:52.850141 ignition[707]: Ignition 2.19.0
Jan 20 00:33:52.850184 ignition[707]: Stage: fetch-offline
Jan 20 00:33:52.850291 ignition[707]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:33:52.850309 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:33:52.859230 ignition[707]: parsed url from cmdline: ""
Jan 20 00:33:52.859239 ignition[707]: no config URL provided
Jan 20 00:33:52.859252 ignition[707]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 00:33:52.859271 ignition[707]: no config at "/usr/lib/ignition/user.ign"
Jan 20 00:33:52.859315 ignition[707]: op(1): [started] loading QEMU firmware config module
Jan 20 00:33:52.859324 ignition[707]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 20 00:33:52.915554 ignition[707]: op(1): [finished] loading QEMU firmware config module
Jan 20 00:33:52.953285 systemd-networkd[789]: lo: Link UP
Jan 20 00:33:52.953329 systemd-networkd[789]: lo: Gained carrier
Jan 20 00:33:52.960280 systemd-networkd[789]: Enumeration completed
Jan 20 00:33:52.961179 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:33:52.963030 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:33:52.963037 systemd-networkd[789]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:33:52.974013 systemd-networkd[789]: eth0: Link UP
Jan 20 00:33:52.974022 systemd-networkd[789]: eth0: Gained carrier
Jan 20 00:33:52.974040 systemd-networkd[789]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:33:53.000150 systemd[1]: Reached target network.target - Network.
Jan 20 00:33:53.030015 systemd-networkd[789]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:33:53.141332 systemd-resolved[268]: Detected conflict on linux IN A 10.0.0.24
Jan 20 00:33:53.143811 systemd-resolved[268]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Jan 20 00:33:53.563932 ignition[707]: parsing config with SHA512: 799f7608f64146176277e469b38774294fad66b2b1641c72b4fb37c0ce8c76700b415c28378c71ecfaf9773bdd0e569e390a59aa86e8c249c0fce1866352b1df
Jan 20 00:33:53.594030 unknown[707]: fetched base config from "system"
Jan 20 00:33:53.595085 unknown[707]: fetched user config from "qemu"
Jan 20 00:33:53.598203 ignition[707]: fetch-offline: fetch-offline passed
Jan 20 00:33:53.602034 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:33:53.598421 ignition[707]: Ignition finished successfully
Jan 20 00:33:53.627103 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 20 00:33:53.667008 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 00:33:53.805150 ignition[796]: Ignition 2.19.0
Jan 20 00:33:53.805257 ignition[796]: Stage: kargs
Jan 20 00:33:53.805596 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:33:53.805617 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:33:53.823912 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 00:33:53.807124 ignition[796]: kargs: kargs passed
Jan 20 00:33:53.864624 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 00:33:53.807206 ignition[796]: Ignition finished successfully
Jan 20 00:33:53.956474 ignition[805]: Ignition 2.19.0
Jan 20 00:33:53.956503 ignition[805]: Stage: disks
Jan 20 00:33:53.956878 ignition[805]: no configs at "/usr/lib/ignition/base.d"
Jan 20 00:33:53.975181 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 00:33:53.956899 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:33:53.977555 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 00:33:53.960270 ignition[805]: disks: disks passed
Jan 20 00:33:54.019079 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 00:33:53.960408 ignition[805]: Ignition finished successfully
Jan 20 00:33:54.041001 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:33:54.062891 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:33:54.128563 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:33:54.160481 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 00:33:54.214864 systemd-fsck[816]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 20 00:33:54.234978 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 00:33:54.277148 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 00:33:54.704417 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none.
Jan 20 00:33:54.708016 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 00:33:54.721721 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:33:54.745875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:33:54.768840 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 00:33:54.775927 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 20 00:33:54.776005 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 00:33:54.836051 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (824)
Jan 20 00:33:54.776045 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:33:54.795644 systemd-networkd[789]: eth0: Gained IPv6LL
Jan 20 00:33:54.877471 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:33:54.877539 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:33:54.887066 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:33:54.897776 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 00:33:54.932554 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:33:54.932783 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 00:33:54.952300 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 00:33:55.058042 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 00:33:55.081570 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory
Jan 20 00:33:55.097517 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 00:33:55.113618 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 00:33:55.463898 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 00:33:55.493072 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 00:33:55.494834 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 00:33:55.597928 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 00:33:55.610226 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:33:55.670902 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 00:33:55.852123 ignition[937]: INFO : Ignition 2.19.0
Jan 20 00:33:55.852123 ignition[937]: INFO : Stage: mount
Jan 20 00:33:55.883235 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:33:55.883235 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:33:55.883235 ignition[937]: INFO : mount: mount passed
Jan 20 00:33:55.883235 ignition[937]: INFO : Ignition finished successfully
Jan 20 00:33:55.898667 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 00:33:55.974348 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 00:33:56.083060 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 00:33:56.142176 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (950)
Jan 20 00:33:56.158472 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132
Jan 20 00:33:56.158558 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 20 00:33:56.161599 kernel: BTRFS info (device vda6): using free space tree
Jan 20 00:33:56.181984 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 20 00:33:56.190446 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
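[Editor's note: the "cut: ...: No such file or directory" messages above come from initrd-setup-root probing account databases that do not exist yet on first boot; the service is a shell script that invokes cut on passwd-style files. A hedged Python stand-in for that kind of first-field extraction; the choice of colon delimiter and first field is the conventional passwd layout, assumed here rather than taken from the script itself.]

    import os

    # Extract the first colon-delimited field from a passwd-style file,
    # mirroring the script's behavior (including its error for missing files).
    def first_fields(path: str) -> list[str]:
        if not os.path.exists(path):
            print(f"cut: {path}: No such file or directory")
            return []
        with open(path) as f:
            return [line.split(":", 1)[0] for line in f if line.strip()]

    print(first_fields("/sysroot/etc/passwd"))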
Jan 20 00:33:56.287875 ignition[967]: INFO : Ignition 2.19.0
Jan 20 00:33:56.299480 ignition[967]: INFO : Stage: files
Jan 20 00:33:56.299480 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:33:56.299480 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:33:56.299480 ignition[967]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 00:33:56.361355 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 00:33:56.361355 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 00:33:56.361355 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 00:33:56.361355 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 00:33:56.361355 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 00:33:56.345591 unknown[967]: wrote ssh authorized keys file for user: core
Jan 20 00:33:56.437065 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 20 00:33:56.437065 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 20 00:33:56.745047 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 00:33:57.734100 kernel: hrtimer: interrupt took 14619334 ns
Jan 20 00:33:57.893471 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 20 00:33:57.893471 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 20 00:33:57.934445 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 20 00:33:58.099079 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 20 00:33:59.077284 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 20 00:33:59.077284 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:33:59.118480 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 20 00:33:59.420017 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 20 00:34:01.019828 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 20 00:34:01.019828 ignition[967]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 20 00:34:01.036197 ignition[967]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 00:34:01.046514 ignition[967]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 00:34:01.046514 ignition[967]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 20 00:34:01.059854 ignition[967]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 20 00:34:01.059854 ignition[967]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:34:01.074369 ignition[967]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 00:34:01.074369 ignition[967]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 20 00:34:01.088296 ignition[967]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:34:01.172910 ignition[967]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:34:01.192180 ignition[967]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 00:34:01.200884 ignition[967]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 00:34:01.200884 ignition[967]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 00:34:01.200884 ignition[967]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 00:34:01.200884 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:34:01.200884 ignition[967]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 00:34:01.200884 ignition[967]: INFO : files: files passed
Jan 20 00:34:01.200884 ignition[967]: INFO : Ignition finished successfully
Jan 20 00:34:01.280716 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 00:34:01.360980 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 00:34:01.395549 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 00:34:01.514236 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 00:34:01.514567 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 00:34:01.532240 initrd-setup-root-after-ignition[994]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 00:34:01.543147 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:34:01.543147 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:34:01.561118 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 00:34:01.557559 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:34:01.612738 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 00:34:01.664545 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 00:34:01.768226 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 00:34:01.768535 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 00:34:01.773766 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 00:34:01.796945 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 00:34:01.823179 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 00:34:01.837983 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 00:34:01.864842 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:34:01.957926 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 00:34:01.996594 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:34:02.021902 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:34:02.028287 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 00:34:02.037210 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 00:34:02.037429 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 00:34:02.044285 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 00:34:02.051183 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 00:34:02.060816 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 00:34:02.070456 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 00:34:02.079525 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 00:34:02.080155 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
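[Editor's note: the files stage above fetches remote artifacts (helm, cilium, the kubernetes sysext image) over HTTPS and writes them under /sysroot. A minimal Python sketch of that fetch-and-write pattern; the URL is taken from the log, but this is only an illustration with the standard library — Ignition's real implementation is Go and additionally handles verification, retries, compression, and file modes.]

    import urllib.request

    # Download a remote artifact and write it to a target path, as the
    # "GET ... attempt #1" / "[finished] writing file" pairs above record.
    def fetch_to(url: str, dest: str) -> None:
        with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
            out.write(resp.read())

    fetch_to("https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz",
             "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz")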
Jan 20 00:34:02.080913 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 00:34:02.081805 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 00:34:02.270761 ignition[1021]: INFO : Ignition 2.19.0
Jan 20 00:34:02.270761 ignition[1021]: INFO : Stage: umount
Jan 20 00:34:02.270761 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 00:34:02.270761 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 00:34:02.270761 ignition[1021]: INFO : umount: umount passed
Jan 20 00:34:02.270761 ignition[1021]: INFO : Ignition finished successfully
Jan 20 00:34:02.082341 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 00:34:02.103780 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 00:34:02.115483 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 00:34:02.116028 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 00:34:02.118266 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 00:34:02.132045 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:34:02.132590 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 00:34:02.132997 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:34:02.133464 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 00:34:02.133881 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 00:34:02.161222 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 00:34:02.162557 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 00:34:02.165557 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 00:34:02.166821 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 00:34:02.170866 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:34:02.171587 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 00:34:02.173862 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 00:34:02.174996 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 00:34:02.175168 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 00:34:02.177145 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 00:34:02.177280 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 00:34:02.177558 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 00:34:02.177787 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 00:34:02.183035 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 00:34:02.183292 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 00:34:02.243226 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 00:34:02.251237 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 00:34:02.251584 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:34:02.266016 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 00:34:02.272788 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 00:34:02.273037 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:34:02.281211 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 00:34:02.281540 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 00:34:02.294566 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 00:34:02.294799 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 00:34:02.318816 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 00:34:02.320185 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 00:34:02.320468 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 00:34:02.327954 systemd[1]: Stopped target network.target - Network.
Jan 20 00:34:02.332078 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 00:34:02.332181 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 00:34:02.332471 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 00:34:02.332564 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 00:34:02.333810 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 00:34:02.664128 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 20 00:34:02.333898 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 00:34:02.334482 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 00:34:02.334561 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 00:34:02.336153 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 00:34:02.336777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 00:34:02.338019 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 00:34:02.338232 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 00:34:02.340259 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 00:34:02.340450 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 00:34:02.366052 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 00:34:02.366510 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 00:34:02.367269 systemd-networkd[789]: eth0: DHCPv6 lease lost
Jan 20 00:34:02.376259 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 00:34:02.376355 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:34:02.381912 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 00:34:02.382098 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 00:34:02.392213 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 00:34:02.392311 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:34:02.425748 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 00:34:02.433006 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 00:34:02.433133 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 00:34:02.443014 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 00:34:02.443118 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:34:02.452120 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 00:34:02.452292 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:34:02.462031 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:34:02.493497 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 00:34:02.493854 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:34:02.504076 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 00:34:02.504316 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 00:34:02.524880 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 00:34:02.525032 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:34:02.528896 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 00:34:02.528970 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:34:02.529583 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 00:34:02.529660 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 00:34:02.531957 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 00:34:02.532077 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 00:34:02.535481 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 00:34:02.535563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 00:34:02.540852 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 00:34:02.541238 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 00:34:02.541318 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:34:02.543578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:34:02.543660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:34:02.563480 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 00:34:02.563740 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 00:34:02.571020 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 00:34:02.581229 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 00:34:02.606873 systemd[1]: Switching root.
Jan 20 00:34:03.050448 systemd-journald[194]: Journal stopped
Jan 20 00:34:05.788898 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 00:34:05.789003 kernel: SELinux: policy capability open_perms=1
Jan 20 00:34:05.789027 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 00:34:05.789056 kernel: SELinux: policy capability always_check_network=0
Jan 20 00:34:05.789082 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 00:34:05.789103 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 00:34:05.789124 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 00:34:05.789186 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 00:34:05.789206 kernel: audit: type=1403 audit(1768869243.227:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 00:34:05.789227 systemd[1]: Successfully loaded SELinux policy in 157.505ms.
Jan 20 00:34:05.789266 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.204ms.
Jan 20 00:34:05.789288 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 00:34:05.789307 systemd[1]: Detected virtualization kvm.
Jan 20 00:34:05.789324 systemd[1]: Detected architecture x86-64.
Jan 20 00:34:05.789344 systemd[1]: Detected first boot.
Jan 20 00:34:05.789366 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 00:34:05.789456 zram_generator::config[1065]: No configuration found.
Jan 20 00:34:05.789493 systemd[1]: Populated /etc with preset unit settings.
Jan 20 00:34:05.789514 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 00:34:05.789532 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 00:34:05.789551 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 00:34:05.789577 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 00:34:05.789599 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 00:34:05.789619 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 00:34:05.789740 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 00:34:05.789765 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 00:34:05.789784 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 00:34:05.789806 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 00:34:05.789828 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 00:34:05.789848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 00:34:05.789866 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 00:34:05.789883 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 00:34:05.789904 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 00:34:05.789962 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 00:34:05.789984 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 00:34:05.790001 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 00:34:05.790021 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 00:34:05.790051 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 00:34:05.790103 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 00:34:05.790123 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 00:34:05.790145 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 00:34:05.790200 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 00:34:05.790223 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 00:34:05.790242 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 00:34:05.790263 systemd[1]: Reached target swap.target - Swaps.
Jan 20 00:34:05.790285 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 00:34:05.790305 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 00:34:05.790323 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 00:34:05.790341 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 00:34:05.790361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 00:34:05.790452 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 00:34:05.790476 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 00:34:05.790498 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 00:34:05.790520 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 00:34:05.790548 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:34:05.790566 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 00:34:05.790586 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 00:34:05.790608 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 00:34:05.790719 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 00:34:05.790747 systemd[1]: Reached target machines.target - Containers.
Jan 20 00:34:05.790765 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 00:34:05.790786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:34:05.790807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 00:34:05.790828 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 00:34:05.790850 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:34:05.790868 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:34:05.790887 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:34:05.790943 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 00:34:05.790964 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:34:05.790982 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 00:34:05.791008 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 00:34:05.791029 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 00:34:05.791051 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 00:34:05.791071 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 00:34:05.791088 kernel: loop: module loaded
Jan 20 00:34:05.791140 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 00:34:05.791163 kernel: ACPI: bus type drm_connector registered
Jan 20 00:34:05.791181 kernel: fuse: init (API version 7.39)
Jan 20 00:34:05.791198 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 00:34:05.791217 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 00:34:05.791237 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 00:34:05.791296 systemd-journald[1146]: Collecting audit messages is disabled.
Jan 20 00:34:05.791370 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 00:34:05.791428 systemd-journald[1146]: Journal started
Jan 20 00:34:05.791465 systemd-journald[1146]: Runtime Journal (/run/log/journal/f7b64fef1f69465d81b912fac5f75e6f) is 6.0M, max 48.3M, 42.2M free.
Jan 20 00:34:05.046869 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 00:34:05.075176 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 00:34:05.076178 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 00:34:05.076875 systemd[1]: systemd-journald.service: Consumed 2.658s CPU time.
Jan 20 00:34:05.797601 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 20 00:34:05.797774 systemd[1]: Stopped verity-setup.service.
Jan 20 00:34:05.808816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:34:05.824767 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 00:34:05.832875 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 00:34:05.837505 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 00:34:05.841930 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 00:34:05.845932 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 00:34:05.850222 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 00:34:05.854767 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 00:34:05.859369 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 00:34:05.865008 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 00:34:05.870643 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 00:34:05.871035 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 00:34:05.876222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:34:05.876563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:34:05.883262 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:34:05.883797 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:34:05.888933 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:34:05.889296 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:34:05.894956 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 00:34:05.895284 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 00:34:05.901084 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:34:05.901598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:34:05.911114 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 00:34:05.916356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 00:34:05.921865 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 00:34:05.966617 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 00:34:06.065579 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 00:34:06.072477 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 00:34:06.076723 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 00:34:06.076853 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 00:34:06.081314 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 20 00:34:06.096260 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 00:34:06.104663 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 00:34:06.110853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:34:06.113537 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 00:34:06.122984 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 00:34:06.128615 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:34:06.146657 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 00:34:06.150217 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:34:06.154041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:34:06.360107 systemd-journald[1146]: Time spent on flushing to /var/log/journal/f7b64fef1f69465d81b912fac5f75e6f is 35.887ms for 989 entries.
Jan 20 00:34:06.360107 systemd-journald[1146]: System Journal (/var/log/journal/f7b64fef1f69465d81b912fac5f75e6f) is 8.0M, max 195.6M, 187.6M free.
Jan 20 00:34:06.531588 systemd-journald[1146]: Received client request to flush runtime journal.
Jan 20 00:34:06.531652 kernel: loop0: detected capacity change from 0 to 142488
Jan 20 00:34:06.325067 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 00:34:06.333942 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 00:34:06.341361 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 00:34:06.346785 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 00:34:06.354307 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 00:34:06.373582 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 00:34:06.386505 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 00:34:06.493567 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 20 00:34:06.499386 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 00:34:06.527904 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 20 00:34:06.534092 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
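[Editor's note: the journald flush report above (35.887 ms for 989 entries) implies a per-entry cost worth a quick back-of-the-envelope check; the division below is the only content, the numbers come straight from the log line.]

    # Average time journald spent per entry while flushing the runtime journal.
    flush_ms, entries = 35.887, 989
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~36.3 us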
Jan 20 00:34:06.739018 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 00:34:06.739136 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 20 00:34:06.740234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 00:34:06.751923 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 00:34:06.753303 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 20 00:34:06.772376 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 00:34:06.791220 kernel: loop1: detected capacity change from 0 to 140768
Jan 20 00:34:06.791066 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 00:34:06.863800 kernel: loop2: detected capacity change from 0 to 224512
Jan 20 00:34:06.947940 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jan 20 00:34:06.947968 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jan 20 00:34:07.009208 kernel: loop3: detected capacity change from 0 to 142488
Jan 20 00:34:07.029567 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 00:34:07.091029 kernel: loop4: detected capacity change from 0 to 140768
Jan 20 00:34:07.202482 kernel: loop5: detected capacity change from 0 to 224512
Jan 20 00:34:07.229364 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 00:34:07.230542 (sd-merge)[1203]: Merged extensions into '/usr'.
Jan 20 00:34:07.237849 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 00:34:07.237896 systemd[1]: Reloading...
Jan 20 00:34:07.534824 zram_generator::config[1233]: No configuration found.
Jan 20 00:34:07.934251 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:34:08.078379 ldconfig[1174]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 00:34:08.097341 systemd[1]: Reloading finished in 858 ms.
Jan 20 00:34:08.137988 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 00:34:08.142171 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 00:34:08.159013 systemd[1]: Starting ensure-sysext.service...
Jan 20 00:34:08.163338 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 00:34:08.176039 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Jan 20 00:34:08.176083 systemd[1]: Reloading...
Jan 20 00:34:08.363587 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 00:34:08.364731 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 00:34:08.366974 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 00:34:08.367515 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jan 20 00:34:08.367758 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jan 20 00:34:08.368717 zram_generator::config[1297]: No configuration found.
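[Editor's note: the (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr. Before merging, systemd-sysext checks each image's /usr/lib/extension-release.d/extension-release.NAME against the host's os-release. A simplified Python sketch of that compatibility gate; the real matching also honors SYSEXT_LEVEL and the ID=_any wildcard, which are omitted here.]

    # Compare an extension's release file with the host os-release, roughly
    # as systemd-sysext does before merging an image into /usr.
    def parse_release(path: str) -> dict[str, str]:
        fields = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, value = line.split("=", 1)
                    fields[key] = value.strip('"')
        return fields

    host = parse_release("/etc/os-release")
    ext = parse_release("/usr/lib/extension-release.d/extension-release.kubernetes")
    compatible = ext.get("ID") == host.get("ID") and (
        "VERSION_ID" not in ext or ext["VERSION_ID"] == host.get("VERSION_ID"))
    print("merge allowed:", compatible)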
Jan 20 00:34:08.376926 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:34:08.376974 systemd-tmpfiles[1268]: Skipping /boot
Jan 20 00:34:08.395974 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 00:34:08.396030 systemd-tmpfiles[1268]: Skipping /boot
Jan 20 00:34:08.937023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 00:34:09.116953 systemd[1]: Reloading finished in 940 ms.
Jan 20 00:34:09.146820 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 00:34:09.163855 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 00:34:09.194171 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 20 00:34:09.263453 augenrules[1349]: No rules
Jan 20 00:34:09.280127 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 00:34:09.286839 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 00:34:09.294153 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 00:34:09.303121 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 00:34:09.316015 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 00:34:09.322341 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 20 00:34:09.332132 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:34:09.332485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:34:09.338881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:34:09.344640 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:34:09.352955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:34:09.356855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:34:09.358972 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Jan 20 00:34:09.378582 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 00:34:09.383487 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:34:09.399242 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 00:34:09.415010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:34:09.415328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:34:09.420962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:34:09.421549 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:34:09.426321 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 00:34:09.430624 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:34:09.430948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:34:09.450037 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 00:34:09.455484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:34:09.455891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:34:09.468032 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:34:09.478951 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:34:09.486563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:34:09.490743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:34:09.500109 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 00:34:09.517083 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 00:34:09.521406 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 00:34:09.521612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:34:09.523632 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 00:34:09.529943 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 00:34:09.535846 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:34:09.536116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:34:09.542648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:34:09.543267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:34:09.548741 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:34:09.549392 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:34:09.596760 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 00:34:09.611470 systemd-resolved[1356]: Positive Trust Anchors:
Jan 20 00:34:09.611512 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 00:34:09.611541 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 00:34:09.625278 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 00:34:09.627578 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:34:09.628592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 00:34:09.637885 systemd-resolved[1356]: Defaulting to hostname 'linux'.
Jan 20 00:34:09.637942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 00:34:09.646280 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 00:34:09.652179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 00:34:09.679622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 00:34:09.732108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 00:34:09.732530 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 00:34:09.732570 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 00:34:09.733390 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 00:34:09.743656 systemd[1]: Finished ensure-sysext.service.
Jan 20 00:34:09.755308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 00:34:09.755743 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 00:34:09.770612 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 00:34:09.771048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 00:34:09.776917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1376)
Jan 20 00:34:09.797517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 00:34:09.798101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 00:34:09.808358 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 00:34:09.808834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 00:34:09.827170 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 00:34:09.836160 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 00:34:09.836287 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 00:34:09.844806 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 20 00:34:09.849010 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 00:34:09.855330 systemd-networkd[1395]: lo: Link UP
Jan 20 00:34:09.855376 systemd-networkd[1395]: lo: Gained carrier
Jan 20 00:34:09.860205 systemd-networkd[1395]: Enumeration completed
Jan 20 00:34:09.863263 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:34:09.863316 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 00:34:09.864556 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 00:34:09.869788 kernel: ACPI: button: Power Button [PWRF]
Jan 20 00:34:09.869622 systemd-networkd[1395]: eth0: Link UP
Jan 20 00:34:09.869772 systemd-networkd[1395]: eth0: Gained carrier
Jan 20 00:34:09.869801 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 00:34:09.885237 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 00:34:09.891275 systemd[1]: Reached target network.target - Network.
Jan 20 00:34:09.896247 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 00:34:09.902499 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 00:34:09.923911 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 00:34:09.988980 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 20 00:34:10.017232 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 20 00:34:10.017926 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 00:34:10.018318 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 00:34:10.019085 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 00:34:10.044899 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 00:34:10.326381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:34:10.327769 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 00:34:10.364299 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 00:34:10.364874 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:34:10.381101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 00:34:10.547637 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 00:34:11.310440 systemd-timesyncd[1424]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 00:34:11.310615 systemd-timesyncd[1424]: Initial clock synchronization to Tue 2026-01-20 00:34:11.310173 UTC.
Jan 20 00:34:11.311827 systemd-resolved[1356]: Clock change detected. Flushing caches.
Jan 20 00:34:11.317046 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 00:34:11.329352 kernel: kvm_amd: TSC scaling supported
Jan 20 00:34:11.329435 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 00:34:11.329465 kernel: kvm_amd: Nested Paging enabled
Jan 20 00:34:11.332469 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 00:34:11.332615 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 00:34:11.418649 kernel: EDAC MC: Ver: 3.0.0
Jan 20 00:34:11.482330 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 20 00:34:11.508807 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 20 00:34:11.512927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 00:34:11.653275 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:34:11.706005 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 20 00:34:11.714106 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
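[Editor's note: systemd-timesyncd above queries 10.0.0.1:123 and steps the clock, at which point systemd-resolved flushes its caches. A minimal SNTP (RFC 4330) client sketch in Python for illustration; the server address is taken from the log, while error handling and the full four-timestamp offset calculation timesyncd actually performs are omitted.]

    import socket, struct, time

    NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 and the Unix epoch

    # Send a mode-3 (client) SNTP request and return the server's transmit time.
    def sntp_time(server: str = "10.0.0.1") -> float:
        pkt = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3; rest of the 48 bytes zero
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2.0)
            s.sendto(pkt, (server, 123))
            data, _ = s.recvfrom(48)
        secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds
        return secs - NTP_EPOCH_DELTA

    print(time.ctime(sntp_time()))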
Jan 20 00:34:11.719740 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 00:34:11.725336 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 00:34:11.731056 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 00:34:11.735612 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 00:34:11.739151 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 00:34:11.742913 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 00:34:11.746823 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 00:34:11.746881 systemd[1]: Reached target paths.target - Path Units.
Jan 20 00:34:11.749637 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 00:34:11.754459 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 00:34:11.763217 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 00:34:11.776693 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 00:34:11.783909 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 20 00:34:11.789992 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 00:34:11.795034 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 00:34:11.799260 systemd[1]: Reached target basic.target - Basic System.
Jan 20 00:34:11.803517 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 00:34:11.803654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 00:34:11.805502 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 00:34:11.867488 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 00:34:11.877368 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 00:34:11.976966 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 00:34:11.980502 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 00:34:11.985196 jq[1450]: false
Jan 20 00:34:11.985884 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 20 00:34:11.985999 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 00:34:11.993640 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 00:34:11.999164 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 00:34:12.010827 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 00:34:12.019246 extend-filesystems[1451]: Found loop3 Jan 20 00:34:12.019246 extend-filesystems[1451]: Found loop4 Jan 20 00:34:12.019246 extend-filesystems[1451]: Found loop5 Jan 20 00:34:12.019246 extend-filesystems[1451]: Found sr0 Jan 20 00:34:12.019246 extend-filesystems[1451]: Found vda Jan 20 00:34:12.019246 extend-filesystems[1451]: Found vda1 Jan 20 00:34:12.019246 extend-filesystems[1451]: Found vda2 Jan 20 00:34:12.019246 extend-filesystems[1451]: Found vda3 Jan 20 00:34:12.019246 extend-filesystems[1451]: Found usr Jan 20 00:34:12.179869 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:34:12.181477 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1373) Jan 20 00:34:12.181514 extend-filesystems[1451]: Found vda4 Jan 20 00:34:12.181514 extend-filesystems[1451]: Found vda6 Jan 20 00:34:12.181514 extend-filesystems[1451]: Found vda7 Jan 20 00:34:12.181514 extend-filesystems[1451]: Found vda9 Jan 20 00:34:12.181514 extend-filesystems[1451]: Checking size of /dev/vda9 Jan 20 00:34:12.181514 extend-filesystems[1451]: Resized partition /dev/vda9 Jan 20 00:34:12.249822 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:34:12.023790 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:34:12.040850 dbus-daemon[1449]: [system] SELinux support is enabled Jan 20 00:34:12.250792 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:34:12.250792 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:34:12.250792 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:34:12.250792 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 00:34:12.028882 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 00:34:12.276683 dbus-daemon[1449]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 00:34:12.297515 extend-filesystems[1451]: Resized filesystem in /dev/vda9 Jan 20 00:34:12.029493 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:34:12.037355 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:34:12.052712 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 00:34:12.300913 update_engine[1465]: I20260120 00:34:12.245199 1465 main.cc:92] Flatcar Update Engine starting Jan 20 00:34:12.300913 update_engine[1465]: I20260120 00:34:12.284368 1465 update_check_scheduler.cc:74] Next update check in 2m16s Jan 20 00:34:12.079178 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:34:12.301332 jq[1470]: true Jan 20 00:34:12.140815 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 00:34:12.176394 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 00:34:12.302709 tar[1475]: linux-amd64/LICENSE Jan 20 00:34:12.302709 tar[1475]: linux-amd64/helm Jan 20 00:34:12.176701 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:34:12.177105 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:34:12.303333 jq[1476]: true Jan 20 00:34:12.177366 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
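[The extend-filesystems/resize2fs records above describe an online grow of the root filesystem from 553472 to 1864699 4k blocks. In byte terms that is roughly 2.1 GiB to 7.1 GiB, which a quick computation confirms:

    BLOCK = 4096  # "(4k) blocks" per the resize2fs output above

    old_blocks, new_blocks = 553472, 1864699
    to_gib = lambda blocks: blocks * BLOCK / 2**30

    print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB
]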
Jan 20 00:34:12.180902 systemd-logind[1462]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:34:12.180941 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:34:12.186158 systemd-logind[1462]: New seat seat0. Jan 20 00:34:12.195917 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:34:12.202469 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:34:12.202917 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:34:12.270351 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:34:12.270410 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:34:12.272147 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:34:12.275226 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:34:12.275260 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:34:12.280004 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 00:34:12.280271 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:34:12.314420 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:34:12.589685 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:34:12.590489 systemd-networkd[1395]: eth0: Gained IPv6LL Jan 20 00:34:12.598766 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 00:34:12.606404 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 00:34:12.612642 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:34:12.630140 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:34:12.736513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:34:12.778510 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 00:34:12.786343 bash[1511]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:34:12.801708 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:34:12.815675 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:34:12.905439 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:34:13.042024 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:34:13.225786 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 00:34:13.240876 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:34:13.242475 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:34:13.287130 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:34:13.297163 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:34:13.353276 systemd[1]: issuegen.service: Deactivated successfully. 
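[The sshd-keygen step above mints the host's SSH identity (RSA, ECDSA and ED25519 keys) on first boot. The same effect can be reproduced with OpenSSH's ssh-keygen -A, which generates any missing host keys of the default types; a minimal sketch:

    import subprocess

    # Generate any missing default host keys under /etc/ssh, as the
    # sshd-keygen step does on first boot (requires root).
    subprocess.run(["ssh-keygen", "-A"], check=True)
]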
Jan 20 00:34:13.354007 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:34:13.382025 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:34:13.551890 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:34:13.587167 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:34:13.601057 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:34:13.604728 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:34:13.867450 containerd[1477]: time="2026-01-20T00:34:13.866586148Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:34:13.907130 containerd[1477]: time="2026-01-20T00:34:13.906946635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.954251363Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.956889468Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.957144434Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.958165811Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.958186369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.958370282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.958385921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.958813189Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.958831393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.958844067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:34:13.954495 containerd[1477]: time="2026-01-20T00:34:13.958853625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:34:13.960513 containerd[1477]: time="2026-01-20T00:34:13.959013393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 20 00:34:13.960513 containerd[1477]: time="2026-01-20T00:34:13.959451161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:34:13.961162 containerd[1477]: time="2026-01-20T00:34:13.960730950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:34:13.961162 containerd[1477]: time="2026-01-20T00:34:13.960751598Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:34:13.961162 containerd[1477]: time="2026-01-20T00:34:13.960997167Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:34:13.961162 containerd[1477]: time="2026-01-20T00:34:13.961090521Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:34:13.968599 containerd[1477]: time="2026-01-20T00:34:13.968529920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:34:13.969093 containerd[1477]: time="2026-01-20T00:34:13.968785227Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:34:13.969618 containerd[1477]: time="2026-01-20T00:34:13.968972537Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 00:34:13.969618 containerd[1477]: time="2026-01-20T00:34:13.969167991Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:34:13.969618 containerd[1477]: time="2026-01-20T00:34:13.969187418Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:34:13.969618 containerd[1477]: time="2026-01-20T00:34:13.969389435Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:34:13.970446 containerd[1477]: time="2026-01-20T00:34:13.970414298Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:34:13.970786 containerd[1477]: time="2026-01-20T00:34:13.970766736Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:34:13.970847 containerd[1477]: time="2026-01-20T00:34:13.970833791Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 00:34:13.970895 containerd[1477]: time="2026-01-20T00:34:13.970882793Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 00:34:13.970940 containerd[1477]: time="2026-01-20T00:34:13.970928388Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:34:13.971018 containerd[1477]: time="2026-01-20T00:34:13.971004880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971114796Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971140634Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971155332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971214242Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971226694Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971237295Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971255909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971267931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971279012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971336039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971398476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971412101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971423432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.971578 containerd[1477]: time="2026-01-20T00:34:13.971462655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.972130 containerd[1477]: time="2026-01-20T00:34:13.971476932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.972130 containerd[1477]: time="2026-01-20T00:34:13.971489997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.972469 containerd[1477]: time="2026-01-20T00:34:13.971509322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.972469 containerd[1477]: time="2026-01-20T00:34:13.972221141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.972469 containerd[1477]: time="2026-01-20T00:34:13.972239035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.972469 containerd[1477]: time="2026-01-20T00:34:13.972274872Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 20 00:34:13.972469 containerd[1477]: time="2026-01-20T00:34:13.972330526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.972469 containerd[1477]: time="2026-01-20T00:34:13.972343881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.972469 containerd[1477]: time="2026-01-20T00:34:13.972355101Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:34:13.972752 containerd[1477]: time="2026-01-20T00:34:13.972732816Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:34:13.973085 containerd[1477]: time="2026-01-20T00:34:13.972978105Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:34:13.973232 containerd[1477]: time="2026-01-20T00:34:13.973210849Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:34:13.974447 containerd[1477]: time="2026-01-20T00:34:13.973281651Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:34:13.974447 containerd[1477]: time="2026-01-20T00:34:13.973339770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 00:34:13.974447 containerd[1477]: time="2026-01-20T00:34:13.973354297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 00:34:13.974447 containerd[1477]: time="2026-01-20T00:34:13.973372070Z" level=info msg="NRI interface is disabled by configuration." Jan 20 00:34:13.974447 containerd[1477]: time="2026-01-20T00:34:13.973382469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 20 00:34:13.974756 containerd[1477]: time="2026-01-20T00:34:13.973839873Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:34:13.974756 containerd[1477]: time="2026-01-20T00:34:13.973892070Z" level=info msg="Connect containerd service" Jan 20 00:34:13.974756 containerd[1477]: time="2026-01-20T00:34:13.973926114Z" level=info msg="using legacy CRI server" Jan 20 00:34:13.974756 containerd[1477]: time="2026-01-20T00:34:13.973933719Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:34:13.974756 containerd[1477]: time="2026-01-20T00:34:13.974057539Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:34:13.976355 containerd[1477]: time="2026-01-20T00:34:13.976324492Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:34:13.976824 
containerd[1477]: time="2026-01-20T00:34:13.976678483Z" level=info msg="Start subscribing containerd event" Jan 20 00:34:13.976938 containerd[1477]: time="2026-01-20T00:34:13.976922518Z" level=info msg="Start recovering state" Jan 20 00:34:13.977099 containerd[1477]: time="2026-01-20T00:34:13.977082287Z" level=info msg="Start event monitor" Jan 20 00:34:13.977221 containerd[1477]: time="2026-01-20T00:34:13.977205317Z" level=info msg="Start snapshots syncer" Jan 20 00:34:13.977681 containerd[1477]: time="2026-01-20T00:34:13.977661408Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:34:13.977781 containerd[1477]: time="2026-01-20T00:34:13.977767466Z" level=info msg="Start streaming server" Jan 20 00:34:13.979170 containerd[1477]: time="2026-01-20T00:34:13.979102729Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:34:13.979443 containerd[1477]: time="2026-01-20T00:34:13.979423036Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:34:13.979736 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:34:13.979978 containerd[1477]: time="2026-01-20T00:34:13.979960180Z" level=info msg="containerd successfully booted in 0.160987s" Jan 20 00:34:14.311365 tar[1475]: linux-amd64/README.md Jan 20 00:34:14.335380 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 00:34:15.935089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:34:15.936350 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:34:15.950034 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:34:15.962396 systemd[1]: Startup finished in 5.301s (kernel) + 17.053s (initrd) + 12.136s (userspace) = 34.491s. Jan 20 00:34:19.697820 kubelet[1561]: E0120 00:34:19.696183 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:34:19.703784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:34:19.704117 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:34:19.710728 systemd[1]: kubelet.service: Consumed 5.483s CPU time. Jan 20 00:34:21.244990 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:34:21.286507 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:52182.service - OpenSSH per-connection server daemon (10.0.0.1:52182). Jan 20 00:34:21.688910 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 52182 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:21.692934 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:22.019841 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 00:34:22.119160 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:34:22.226774 systemd-logind[1462]: New session 1 of user core. Jan 20 00:34:22.492822 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:34:22.527883 systemd[1]: Starting user@500.service - User Manager for UID 500... 
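[Two of the errors above are expected at this point in the boot. First, containerd's "failed to load cni during init" simply means no network config exists yet in /etc/cni/net.d (the NetworkPluginConfDir from the dumped CRI config); a CNI provider normally installs one later. Purely for illustration, a minimal conflist of the kind that would satisfy that check, using the standard bridge/host-local plugins with an assumed network name and subnet not taken from this log:

    import json, pathlib

    # Illustrative only: plugin choice, network name and subnet are
    # assumptions, not values from this machine.
    conflist = {
        "cniVersion": "0.3.1",
        "name": "examplenet",
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }
    path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2))

Second, the kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is written during init/join, and until then systemd simply keeps restarting the unit. A sketch of a minimal KubeletConfiguration of the sort that ends up at that path (field values here are common defaults, not what this node was actually given):

    import pathlib

    lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        # systemd driver matches SystemdCgroup:true in the CRI config above
        "cgroupDriver: systemd",
        "staticPodPath: /etc/kubernetes/manifests",
    ]
    pathlib.Path("/var/lib/kubelet/config.yaml").write_text("\n".join(lines) + "\n")
]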
Jan 20 00:34:23.184630 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:34:24.179741 systemd[1580]: Queued start job for default target default.target. Jan 20 00:34:24.190301 systemd[1580]: Created slice app.slice - User Application Slice. Jan 20 00:34:24.190410 systemd[1580]: Reached target paths.target - Paths. Jan 20 00:34:24.190426 systemd[1580]: Reached target timers.target - Timers. Jan 20 00:34:24.193219 systemd[1580]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:34:24.214368 systemd[1580]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:34:24.214678 systemd[1580]: Reached target sockets.target - Sockets. Jan 20 00:34:24.214736 systemd[1580]: Reached target basic.target - Basic System. Jan 20 00:34:24.214806 systemd[1580]: Reached target default.target - Main User Target. Jan 20 00:34:24.214867 systemd[1580]: Startup finished in 514ms. Jan 20 00:34:24.214965 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:34:24.310451 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:34:24.701812 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:34118.service - OpenSSH per-connection server daemon (10.0.0.1:34118). Jan 20 00:34:24.749850 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 34118 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:24.752683 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:24.781398 systemd-logind[1462]: New session 2 of user core. Jan 20 00:34:24.792861 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:34:24.860864 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:24.880297 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:34118.service: Deactivated successfully. Jan 20 00:34:24.882813 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:34:24.884931 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. Jan 20 00:34:24.895994 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:34124.service - OpenSSH per-connection server daemon (10.0.0.1:34124). Jan 20 00:34:24.897721 systemd-logind[1462]: Removed session 2. Jan 20 00:34:24.937858 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 34124 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:24.940482 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:24.946920 systemd-logind[1462]: New session 3 of user core. Jan 20 00:34:24.962151 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:34:25.032482 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:25.075295 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:34124.service: Deactivated successfully. Jan 20 00:34:25.150810 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:34:25.154928 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:34:25.179048 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:34132.service - OpenSSH per-connection server daemon (10.0.0.1:34132). Jan 20 00:34:25.193720 systemd-logind[1462]: Removed session 3. 
Jan 20 00:34:25.572987 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 34132 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:25.575684 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:25.583262 systemd-logind[1462]: New session 4 of user core. Jan 20 00:34:25.592747 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:34:25.654034 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:25.686641 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:34132.service: Deactivated successfully. Jan 20 00:34:25.689297 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:34:25.691982 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:34:25.701112 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:34134.service - OpenSSH per-connection server daemon (10.0.0.1:34134). Jan 20 00:34:25.702407 systemd-logind[1462]: Removed session 4. Jan 20 00:34:25.742796 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 34134 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:25.745236 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:25.751290 systemd-logind[1462]: New session 5 of user core. Jan 20 00:34:25.760965 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:34:25.849711 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:34:25.850314 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:34:25.884261 sudo[1615]: pam_unix(sudo:session): session closed for user root Jan 20 00:34:25.888821 sshd[1612]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:25.899284 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:34134.service: Deactivated successfully. Jan 20 00:34:25.901680 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:34:25.903801 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:34:25.919144 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:34138.service - OpenSSH per-connection server daemon (10.0.0.1:34138). Jan 20 00:34:25.920860 systemd-logind[1462]: Removed session 5. Jan 20 00:34:26.113858 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 34138 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:26.116309 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:26.125313 systemd-logind[1462]: New session 6 of user core. Jan 20 00:34:26.138829 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:34:26.240835 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:34:26.241517 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:34:26.249315 sudo[1624]: pam_unix(sudo:session): session closed for user root Jan 20 00:34:26.265186 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 00:34:26.265895 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:34:26.292068 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 00:34:26.294799 auditctl[1627]: No rules Jan 20 00:34:26.295460 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 20 00:34:26.295950 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 00:34:26.300875 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:34:26.392084 augenrules[1645]: No rules Jan 20 00:34:26.394095 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:34:26.396626 sudo[1623]: pam_unix(sudo:session): session closed for user root Jan 20 00:34:26.399152 sshd[1620]: pam_unix(sshd:session): session closed for user core Jan 20 00:34:26.428211 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:34138.service: Deactivated successfully. Jan 20 00:34:26.430504 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:34:26.432653 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:34:26.440043 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:34148.service - OpenSSH per-connection server daemon (10.0.0.1:34148). Jan 20 00:34:26.446489 systemd-logind[1462]: Removed session 6. Jan 20 00:34:26.562736 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 34148 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:34:26.565318 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:34:26.585059 systemd-logind[1462]: New session 7 of user core. Jan 20 00:34:26.596835 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:34:26.658281 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:34:26.659156 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:34:29.895756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 00:34:30.026307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:34:30.617064 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 00:34:30.622529 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 00:34:31.703990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:34:31.731622 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:34:33.095874 kubelet[1683]: E0120 00:34:33.095189 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:34:33.105337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:34:33.105732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:34:33.106948 systemd[1]: kubelet.service: Consumed 3.188s CPU time. Jan 20 00:34:33.978735 dockerd[1677]: time="2026-01-20T00:34:33.978624401Z" level=info msg="Starting up" Jan 20 00:34:34.601780 dockerd[1677]: time="2026-01-20T00:34:34.601662662Z" level=info msg="Loading containers: start." Jan 20 00:34:35.038623 kernel: Initializing XFRM netlink socket Jan 20 00:34:35.314299 systemd-networkd[1395]: docker0: Link UP Jan 20 00:34:35.343847 dockerd[1677]: time="2026-01-20T00:34:35.343709150Z" level=info msg="Loading containers: done." 
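[On the audit-rules sequence above: the sudo commands remove the two shipped rule files from /etc/audit/rules.d and restart audit-rules, after which both auditctl and augenrules report "No rules", since augenrules compiles whatever *.rules files remain in that directory and it is now empty. A small sketch reproducing the check:

    import pathlib

    rules_dir = pathlib.Path("/etc/audit/rules.d")
    remaining = sorted(rules_dir.glob("*.rules"))

    # With 80-selinux.rules and 99-default.rules gone, nothing is left
    # for augenrules to compile, hence the "No rules" messages.
    print(remaining or "No rules")
]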
Jan 20 00:34:35.444681 dockerd[1677]: time="2026-01-20T00:34:35.444437449Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 00:34:35.445054 dockerd[1677]: time="2026-01-20T00:34:35.444853536Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 00:34:35.445187 dockerd[1677]: time="2026-01-20T00:34:35.445127157Z" level=info msg="Daemon has completed initialization" Jan 20 00:34:35.531843 dockerd[1677]: time="2026-01-20T00:34:35.530895707Z" level=info msg="API listen on /run/docker.sock" Jan 20 00:34:35.533937 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 00:34:37.573649 containerd[1477]: time="2026-01-20T00:34:37.572898981Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 00:34:38.394055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823126698.mount: Deactivated successfully. Jan 20 00:34:42.392730 containerd[1477]: time="2026-01-20T00:34:42.392268906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:42.392730 containerd[1477]: time="2026-01-20T00:34:42.392774526Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 20 00:34:42.395829 containerd[1477]: time="2026-01-20T00:34:42.394987161Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:42.400241 containerd[1477]: time="2026-01-20T00:34:42.400175518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:42.402018 containerd[1477]: time="2026-01-20T00:34:42.401934234Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 4.828780378s" Jan 20 00:34:42.402245 containerd[1477]: time="2026-01-20T00:34:42.402152511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 00:34:42.405811 containerd[1477]: time="2026-01-20T00:34:42.405427605Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 00:34:43.144799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 00:34:43.237200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:34:43.616451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
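[The pull records above carry enough information to gauge registry throughput: the kube-apiserver image, 29,067,246 bytes, arrived in 4.828780378s. Roughly:

    size_bytes = 29_067_246   # image size from the PullImage record above
    elapsed_s = 4.828780378   # pull duration from the same record

    mib_per_s = size_bytes / elapsed_s / 2**20
    print(f"~{mib_per_s:.1f} MiB/s")  # about 5.7 MiB/s
]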
Jan 20 00:34:43.660856 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:34:43.948528 kubelet[1911]: E0120 00:34:43.948237 1911 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:34:43.955528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:34:43.956377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:34:44.937488 containerd[1477]: time="2026-01-20T00:34:44.937142764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:44.940104 containerd[1477]: time="2026-01-20T00:34:44.939419953Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 20 00:34:44.941229 containerd[1477]: time="2026-01-20T00:34:44.941134043Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:44.946644 containerd[1477]: time="2026-01-20T00:34:44.946492281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:44.948331 containerd[1477]: time="2026-01-20T00:34:44.948251269Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.542761617s" Jan 20 00:34:44.948484 containerd[1477]: time="2026-01-20T00:34:44.948326678Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 00:34:44.951499 containerd[1477]: time="2026-01-20T00:34:44.951439941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 00:34:48.371992 containerd[1477]: time="2026-01-20T00:34:48.362959565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:48.376242 containerd[1477]: time="2026-01-20T00:34:48.372980375Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 20 00:34:48.376242 containerd[1477]: time="2026-01-20T00:34:48.374992158Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:48.382055 containerd[1477]: time="2026-01-20T00:34:48.381972320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 20 00:34:48.384192 containerd[1477]: time="2026-01-20T00:34:48.384085956Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.432580152s" Jan 20 00:34:48.384192 containerd[1477]: time="2026-01-20T00:34:48.384178387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 00:34:48.387712 containerd[1477]: time="2026-01-20T00:34:48.387661651Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 00:34:50.949026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988743331.mount: Deactivated successfully. Jan 20 00:34:53.027799 containerd[1477]: time="2026-01-20T00:34:53.027373331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:53.027799 containerd[1477]: time="2026-01-20T00:34:53.027971364Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 20 00:34:53.030163 containerd[1477]: time="2026-01-20T00:34:53.029934279Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:53.033672 containerd[1477]: time="2026-01-20T00:34:53.033479847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:53.034879 containerd[1477]: time="2026-01-20T00:34:53.034810880Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 4.646834568s" Jan 20 00:34:53.034971 containerd[1477]: time="2026-01-20T00:34:53.034886079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 00:34:53.038718 containerd[1477]: time="2026-01-20T00:34:53.038641917Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 00:34:53.671718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649017914.mount: Deactivated successfully. Jan 20 00:34:54.145010 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 00:34:54.167050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:34:54.557436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 00:34:54.595089 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:34:54.769068 kubelet[1952]: E0120 00:34:54.768985 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:34:54.773672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:34:54.774021 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:34:55.700836 containerd[1477]: time="2026-01-20T00:34:55.700310539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:55.717367 containerd[1477]: time="2026-01-20T00:34:55.701828083Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 20 00:34:55.723078 containerd[1477]: time="2026-01-20T00:34:55.722689267Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:55.784101 containerd[1477]: time="2026-01-20T00:34:55.782120560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:55.794367 containerd[1477]: time="2026-01-20T00:34:55.794279368Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.755533828s" Jan 20 00:34:55.795036 containerd[1477]: time="2026-01-20T00:34:55.794372260Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 00:34:55.797679 containerd[1477]: time="2026-01-20T00:34:55.797523097Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 00:34:57.487278 update_engine[1465]: I20260120 00:34:57.482406 1465 update_attempter.cc:509] Updating boot flags... Jan 20 00:34:57.607627 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2007) Jan 20 00:34:57.660020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292457588.mount: Deactivated successfully. 
Jan 20 00:34:57.672418 containerd[1477]: time="2026-01-20T00:34:57.672315076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:57.673796 containerd[1477]: time="2026-01-20T00:34:57.673581067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 20 00:34:57.675188 containerd[1477]: time="2026-01-20T00:34:57.675115743Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:57.678811 containerd[1477]: time="2026-01-20T00:34:57.678743714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:34:57.681323 containerd[1477]: time="2026-01-20T00:34:57.680431682Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.882785206s" Jan 20 00:34:57.681323 containerd[1477]: time="2026-01-20T00:34:57.680476986Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 00:34:57.683742 containerd[1477]: time="2026-01-20T00:34:57.683701640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 00:34:57.708647 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2007) Jan 20 00:34:57.786637 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2007) Jan 20 00:34:58.187439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924390916.mount: Deactivated successfully. 
Jan 20 00:35:03.656305 containerd[1477]: time="2026-01-20T00:35:03.655730664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:35:03.660846 containerd[1477]: time="2026-01-20T00:35:03.659811883Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 20 00:35:03.662024 containerd[1477]: time="2026-01-20T00:35:03.661910496Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:35:03.666694 containerd[1477]: time="2026-01-20T00:35:03.666620096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:35:03.668115 containerd[1477]: time="2026-01-20T00:35:03.668036792Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.984126724s" Jan 20 00:35:03.668115 containerd[1477]: time="2026-01-20T00:35:03.668092514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 00:35:04.901510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 00:35:04.910808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:35:05.098864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:35:05.105750 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:35:05.199285 kubelet[2104]: E0120 00:35:05.197936 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:35:05.203266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:35:05.203512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:35:08.111851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:35:08.126070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:35:08.166967 systemd[1]: Reloading requested from client PID 2119 ('systemctl') (unit session-7.scope)... Jan 20 00:35:08.167006 systemd[1]: Reloading... Jan 20 00:35:08.275652 zram_generator::config[2158]: No configuration found. Jan 20 00:35:08.447069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:35:08.558947 systemd[1]: Reloading finished in 391 ms. Jan 20 00:35:08.654111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
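[Two patterns worth pulling out of the kubelet records in this stretch. The "Scheduled restart job" stamps (counters 1 through 4) arrive roughly 11-13 seconds apart, consistent with a restart delay of about ten seconds plus the few seconds each attempt runs before exiting; the spacing is an inference from the timestamps, not something the log states:

    from datetime import datetime

    # "Scheduled restart job, restart counter is at N" stamps from this log.
    stamps = ["00:34:29.895756", "00:34:43.144799",
              "00:34:54.145010", "00:35:04.901510"]
    ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]

    print([round((b - a).total_seconds(), 1) for a, b in zip(ts, ts[1:])])
    # -> [13.2, 11.0, 10.8]

And once the kubelet does start (the records that follow), its certificate bootstrap fails with "dial tcp 10.0.0.24:6443: connect: connection refused": nothing is listening on the API server port yet, which is expected while the kubelet itself is still bringing up the control-plane static pods. A minimal probe of the kind that distinguishes "refused" from "up":

    import socket

    def api_server_up(host: str = "10.0.0.24", port: int = 6443) -> bool:
        # TCP connect only; mirrors the dial step failing in the records below.
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:  # ConnectionRefusedError, timeouts, etc.
            return False

    print(api_server_up())
]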
Jan 20 00:35:08.677592 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:35:08.679839 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:35:08.680488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:35:08.723600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:35:08.933151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:35:08.962222 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:35:09.033321 kubelet[2208]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:35:09.033321 kubelet[2208]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:35:09.033321 kubelet[2208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:35:09.033321 kubelet[2208]: I0120 00:35:09.033253 2208 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:35:09.366082 kubelet[2208]: I0120 00:35:09.365306 2208 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 00:35:09.366082 kubelet[2208]: I0120 00:35:09.365365 2208 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:35:09.366674 kubelet[2208]: I0120 00:35:09.366165 2208 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 00:35:09.392080 kubelet[2208]: E0120 00:35:09.392012 2208 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:35:09.394348 kubelet[2208]: I0120 00:35:09.394298 2208 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:35:09.411721 kubelet[2208]: E0120 00:35:09.409280 2208 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:35:09.411721 kubelet[2208]: I0120 00:35:09.409356 2208 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:35:09.422984 kubelet[2208]: I0120 00:35:09.422880 2208 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:35:09.424750 kubelet[2208]: I0120 00:35:09.424636 2208 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:35:09.425043 kubelet[2208]: I0120 00:35:09.424715 2208 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:35:09.425346 kubelet[2208]: I0120 00:35:09.425091 2208 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:35:09.425346 kubelet[2208]: I0120 00:35:09.425108 2208 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 00:35:09.425490 kubelet[2208]: I0120 00:35:09.425435 2208 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:35:09.431735 kubelet[2208]: I0120 00:35:09.431664 2208 kubelet.go:446] "Attempting to sync node with API server" Jan 20 00:35:09.431735 kubelet[2208]: I0120 00:35:09.431731 2208 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:35:09.431842 kubelet[2208]: I0120 00:35:09.431788 2208 kubelet.go:352] "Adding apiserver pod source" Jan 20 00:35:09.431842 kubelet[2208]: I0120 00:35:09.431808 2208 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:35:09.436700 kubelet[2208]: W0120 00:35:09.436610 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Jan 20 00:35:09.436788 kubelet[2208]: W0120 00:35:09.436610 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Jan 20 00:35:09.436788 kubelet[2208]: E0120 00:35:09.436726 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:35:09.436788 kubelet[2208]: E0120 00:35:09.436744 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:35:09.439077 kubelet[2208]: I0120 00:35:09.439045 2208 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:35:09.439711 kubelet[2208]: I0120 00:35:09.439662 2208 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 00:35:09.440971 kubelet[2208]: W0120 00:35:09.440888 2208 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 00:35:09.444136 kubelet[2208]: I0120 00:35:09.444031 2208 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:35:09.444136 kubelet[2208]: I0120 00:35:09.444113 2208 server.go:1287] "Started kubelet" Jan 20 00:35:09.446424 kubelet[2208]: I0120 00:35:09.446279 2208 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:35:09.447905 kubelet[2208]: I0120 00:35:09.447782 2208 server.go:479] "Adding debug handlers to kubelet server" Jan 20 00:35:09.448406 kubelet[2208]: I0120 00:35:09.448360 2208 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:35:09.448804 kubelet[2208]: I0120 00:35:09.448781 2208 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:35:09.449121 kubelet[2208]: I0120 00:35:09.449058 2208 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:35:09.449953 kubelet[2208]: I0120 00:35:09.449528 2208 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:35:09.451463 kubelet[2208]: I0120 00:35:09.451441 2208 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:35:09.452137 kubelet[2208]: I0120 00:35:09.452063 2208 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:35:09.452527 kubelet[2208]: I0120 00:35:09.452436 2208 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:35:09.454265 kubelet[2208]: E0120 00:35:09.453335 2208 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:35:09.454598 kubelet[2208]: W0120 00:35:09.454425 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Jan 20 00:35:09.454598 kubelet[2208]: E0120 00:35:09.454463 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" 
logger="UnhandledError" Jan 20 00:35:09.454598 kubelet[2208]: I0120 00:35:09.454470 2208 factory.go:221] Registration of the systemd container factory successfully Jan 20 00:35:09.454706 kubelet[2208]: E0120 00:35:09.454592 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="200ms" Jan 20 00:35:09.454753 kubelet[2208]: I0120 00:35:09.454531 2208 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:35:09.454915 kubelet[2208]: E0120 00:35:09.454847 2208 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:35:09.456739 kubelet[2208]: I0120 00:35:09.456689 2208 factory.go:221] Registration of the containerd container factory successfully Jan 20 00:35:09.457003 kubelet[2208]: E0120 00:35:09.454464 2208 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4943db8116fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:35:09.444069115 +0000 UTC m=+0.466785797,LastTimestamp:2026-01-20 00:35:09.444069115 +0000 UTC m=+0.466785797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:35:09.485895 kubelet[2208]: I0120 00:35:09.485795 2208 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 00:35:09.488949 kubelet[2208]: I0120 00:35:09.488927 2208 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 00:35:09.489467 kubelet[2208]: I0120 00:35:09.489089 2208 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 00:35:09.489467 kubelet[2208]: I0120 00:35:09.489116 2208 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 00:35:09.489467 kubelet[2208]: I0120 00:35:09.489125 2208 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 00:35:09.489467 kubelet[2208]: E0120 00:35:09.489234 2208 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:35:09.491836 kubelet[2208]: W0120 00:35:09.491725 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Jan 20 00:35:09.491836 kubelet[2208]: E0120 00:35:09.491815 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:35:09.494609 kubelet[2208]: I0120 00:35:09.494487 2208 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:35:09.494609 kubelet[2208]: I0120 00:35:09.494520 2208 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:35:09.494839 kubelet[2208]: I0120 00:35:09.494727 2208 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:35:09.555495 kubelet[2208]: E0120 00:35:09.555317 2208 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:35:09.589929 kubelet[2208]: E0120 00:35:09.589813 2208 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 00:35:09.643205 kubelet[2208]: I0120 00:35:09.643080 2208 policy_none.go:49] "None policy: Start" Jan 20 00:35:09.643340 kubelet[2208]: I0120 00:35:09.643229 2208 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:35:09.643472 kubelet[2208]: I0120 00:35:09.643421 2208 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:35:09.655000 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 00:35:09.655706 kubelet[2208]: E0120 00:35:09.655663 2208 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:35:09.656145 kubelet[2208]: E0120 00:35:09.656103 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="400ms" Jan 20 00:35:09.681120 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 00:35:09.685795 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
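kubepods.slice and its burstable/besteffort children map onto the kubelet's QoS classes (Guaranteed pods are parented directly under kubepods.slice). Since the nodeConfig logged earlier reports CgroupDriver "systemd" and CgroupVersion 2, the hierarchy is visible in the unified cgroup filesystem, for example:

  # QoS-class slices created by the kubelet
  ls -d /sys/fs/cgroup/kubepods.slice/kubepods-*.slice
  # Or render the whole tree
  systemd-cgls --no-pager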
Jan 20 00:35:09.721145 kubelet[2208]: I0120 00:35:09.720831 2208 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 00:35:09.722144 kubelet[2208]: I0120 00:35:09.721936 2208 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:35:09.722144 kubelet[2208]: I0120 00:35:09.721978 2208 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:35:09.722722 kubelet[2208]: I0120 00:35:09.722700 2208 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:35:09.726388 kubelet[2208]: E0120 00:35:09.726338 2208 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 00:35:09.726637 kubelet[2208]: E0120 00:35:09.726434 2208 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 00:35:09.807003 systemd[1]: Created slice kubepods-burstable-pod00a0ec2cb9d424ca842d67babb2337c8.slice - libcontainer container kubepods-burstable-pod00a0ec2cb9d424ca842d67babb2337c8.slice. Jan 20 00:35:09.821274 kubelet[2208]: E0120 00:35:09.821146 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:09.823640 kubelet[2208]: I0120 00:35:09.823451 2208 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:35:09.824290 kubelet[2208]: E0120 00:35:09.824243 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Jan 20 00:35:09.827212 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 20 00:35:09.830013 kubelet[2208]: E0120 00:35:09.829957 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:09.832455 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. 
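The per-pod slices above and below belong to the control-plane static pods discovered under the manifest path logged earlier ("Adding static pod path" path="/etc/kubernetes/manifests"); the repeated mirror-pod and node-registration errors only mean the API server those manifests describe is not serving yet. The manifests can be listed on disk (kubeadm layout assumed):

  # Static pod manifests the kubelet is materializing
  ls /etc/kubernetes/manifests/
  # typically kube-apiserver.yaml, kube-controller-manager.yaml,
  # kube-scheduler.yaml and, on stacked control planes, etcd.yaml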
Jan 20 00:35:09.834829 kubelet[2208]: E0120 00:35:09.834736 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:09.854341 kubelet[2208]: I0120 00:35:09.854240 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:09.854341 kubelet[2208]: I0120 00:35:09.854300 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:09.854341 kubelet[2208]: I0120 00:35:09.854320 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:09.854341 kubelet[2208]: I0120 00:35:09.854337 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00a0ec2cb9d424ca842d67babb2337c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00a0ec2cb9d424ca842d67babb2337c8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:35:09.854341 kubelet[2208]: I0120 00:35:09.854355 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00a0ec2cb9d424ca842d67babb2337c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"00a0ec2cb9d424ca842d67babb2337c8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:35:09.854797 kubelet[2208]: I0120 00:35:09.854387 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00a0ec2cb9d424ca842d67babb2337c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00a0ec2cb9d424ca842d67babb2337c8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:35:09.854797 kubelet[2208]: I0120 00:35:09.854404 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:09.854797 kubelet[2208]: I0120 00:35:09.854421 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:09.854797 kubelet[2208]: I0120 00:35:09.854493 2208 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:35:10.026958 kubelet[2208]: I0120 00:35:10.026760 2208 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:35:10.027409 kubelet[2208]: E0120 00:35:10.027300 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Jan 20 00:35:10.056903 kubelet[2208]: E0120 00:35:10.056773 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="800ms" Jan 20 00:35:10.127491 kubelet[2208]: E0120 00:35:10.126942 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:10.130734 kubelet[2208]: E0120 00:35:10.130620 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:10.133681 containerd[1477]: time="2026-01-20T00:35:10.133597240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 00:35:10.134391 containerd[1477]: time="2026-01-20T00:35:10.133634333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00a0ec2cb9d424ca842d67babb2337c8,Namespace:kube-system,Attempt:0,}" Jan 20 00:35:10.136513 kubelet[2208]: E0120 00:35:10.136132 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:10.136974 containerd[1477]: time="2026-01-20T00:35:10.136934295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 00:35:10.343386 kubelet[2208]: W0120 00:35:10.338636 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Jan 20 00:35:10.343386 kubelet[2208]: E0120 00:35:10.342968 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:35:10.415108 kubelet[2208]: W0120 00:35:10.414628 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Jan 20 00:35:10.415108 kubelet[2208]: E0120 00:35:10.414817 2208 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:35:10.430241 kubelet[2208]: I0120 00:35:10.430098 2208 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:35:10.430731 kubelet[2208]: E0120 00:35:10.430670 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Jan 20 00:35:10.616520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376843781.mount: Deactivated successfully. Jan 20 00:35:10.624466 containerd[1477]: time="2026-01-20T00:35:10.624337062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:35:10.628238 containerd[1477]: time="2026-01-20T00:35:10.628107981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 20 00:35:10.629727 containerd[1477]: time="2026-01-20T00:35:10.629417680Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:35:10.631817 containerd[1477]: time="2026-01-20T00:35:10.631265781Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:35:10.633083 containerd[1477]: time="2026-01-20T00:35:10.632882237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:35:10.634300 containerd[1477]: time="2026-01-20T00:35:10.634243377Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:35:10.635440 containerd[1477]: time="2026-01-20T00:35:10.635370965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:35:10.638596 containerd[1477]: time="2026-01-20T00:35:10.638441174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:35:10.640948 kubelet[2208]: W0120 00:35:10.640862 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Jan 20 00:35:10.640948 kubelet[2208]: E0120 00:35:10.640944 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:35:10.642484 containerd[1477]: time="2026-01-20T00:35:10.642415352Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 508.456986ms" Jan 20 00:35:10.647648 containerd[1477]: time="2026-01-20T00:35:10.647509681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.682623ms" Jan 20 00:35:10.648499 containerd[1477]: time="2026-01-20T00:35:10.648418020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.313368ms" Jan 20 00:35:10.776357 kubelet[2208]: W0120 00:35:10.775808 2208 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Jan 20 00:35:10.776357 kubelet[2208]: E0120 00:35:10.776105 2208 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:35:10.875807 kubelet[2208]: E0120 00:35:10.873978 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="1.6s" Jan 20 00:35:10.955926 containerd[1477]: time="2026-01-20T00:35:10.955335685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:35:10.955926 containerd[1477]: time="2026-01-20T00:35:10.955444960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:35:10.955926 containerd[1477]: time="2026-01-20T00:35:10.955466269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:10.955926 containerd[1477]: time="2026-01-20T00:35:10.955792494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:35:10.955926 containerd[1477]: time="2026-01-20T00:35:10.955868065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:35:10.955926 containerd[1477]: time="2026-01-20T00:35:10.955890496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:10.955926 containerd[1477]: time="2026-01-20T00:35:10.956059542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:10.974882 containerd[1477]: time="2026-01-20T00:35:10.969227811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:10.998431 containerd[1477]: time="2026-01-20T00:35:10.998022913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:35:10.998431 containerd[1477]: time="2026-01-20T00:35:10.998079339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:35:10.998431 containerd[1477]: time="2026-01-20T00:35:10.998089718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:10.998431 containerd[1477]: time="2026-01-20T00:35:10.998226282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:11.031273 systemd[1]: Started cri-containerd-da0cacff22afcedb39c747002ba1e8f83c2db60df6abab8950a5ec4ed8bf0d51.scope - libcontainer container da0cacff22afcedb39c747002ba1e8f83c2db60df6abab8950a5ec4ed8bf0d51. Jan 20 00:35:11.127785 systemd[1]: Started cri-containerd-e2eea036139292a5ea41342244d90f77a1bd2e18d3752d4880e4839558f53452.scope - libcontainer container e2eea036139292a5ea41342244d90f77a1bd2e18d3752d4880e4839558f53452. Jan 20 00:35:11.136150 systemd[1]: Started cri-containerd-372b0e2e0cf5ce54b6ea8bab144b80a5ec10d04156da2dc281fb66a1b3d86aad.scope - libcontainer container 372b0e2e0cf5ce54b6ea8bab144b80a5ec10d04156da2dc281fb66a1b3d86aad. 
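Each cri-containerd-<id>.scope started here wraps one pod sandbox (the pause container pulled just above); the long hex name is the sandbox id. Assuming crictl is available on the node, those ids can be resolved back to pod metadata:

  # List sandboxes with ids, names and states
  crictl pods
  # Inspect one sandbox id taken from this log
  crictl inspectp da0cacff22afcedb39c747002ba1e8f83c2db60df6abab8950a5ec4ed8bf0d51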
Jan 20 00:35:11.241477 kubelet[2208]: I0120 00:35:11.234747 2208 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:35:11.241477 kubelet[2208]: E0120 00:35:11.235910 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Jan 20 00:35:11.278942 containerd[1477]: time="2026-01-20T00:35:11.278795259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:00a0ec2cb9d424ca842d67babb2337c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"da0cacff22afcedb39c747002ba1e8f83c2db60df6abab8950a5ec4ed8bf0d51\"" Jan 20 00:35:11.284962 kubelet[2208]: E0120 00:35:11.284634 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:11.289017 containerd[1477]: time="2026-01-20T00:35:11.288977553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"372b0e2e0cf5ce54b6ea8bab144b80a5ec10d04156da2dc281fb66a1b3d86aad\"" Jan 20 00:35:11.294797 containerd[1477]: time="2026-01-20T00:35:11.294605916Z" level=info msg="CreateContainer within sandbox \"da0cacff22afcedb39c747002ba1e8f83c2db60df6abab8950a5ec4ed8bf0d51\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 00:35:11.295261 containerd[1477]: time="2026-01-20T00:35:11.295077836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2eea036139292a5ea41342244d90f77a1bd2e18d3752d4880e4839558f53452\"" Jan 20 00:35:11.295341 kubelet[2208]: E0120 00:35:11.295082 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:11.296022 kubelet[2208]: E0120 00:35:11.295977 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:11.298585 containerd[1477]: time="2026-01-20T00:35:11.298467448Z" level=info msg="CreateContainer within sandbox \"e2eea036139292a5ea41342244d90f77a1bd2e18d3752d4880e4839558f53452\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 00:35:11.299143 containerd[1477]: time="2026-01-20T00:35:11.299053165Z" level=info msg="CreateContainer within sandbox \"372b0e2e0cf5ce54b6ea8bab144b80a5ec10d04156da2dc281fb66a1b3d86aad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 00:35:11.321054 containerd[1477]: time="2026-01-20T00:35:11.320836054Z" level=info msg="CreateContainer within sandbox \"da0cacff22afcedb39c747002ba1e8f83c2db60df6abab8950a5ec4ed8bf0d51\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64beea474b602cefb84379c994040474189c54a5035a91d1eaba3777a0ed1f35\"" Jan 20 00:35:11.323402 containerd[1477]: time="2026-01-20T00:35:11.321947073Z" level=info msg="StartContainer for \"64beea474b602cefb84379c994040474189c54a5035a91d1eaba3777a0ed1f35\"" Jan 20 00:35:11.329387 containerd[1477]: time="2026-01-20T00:35:11.329318990Z" level=info msg="CreateContainer within sandbox 
\"372b0e2e0cf5ce54b6ea8bab144b80a5ec10d04156da2dc281fb66a1b3d86aad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69\"" Jan 20 00:35:11.330312 containerd[1477]: time="2026-01-20T00:35:11.330235396Z" level=info msg="StartContainer for \"799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69\"" Jan 20 00:35:11.331816 containerd[1477]: time="2026-01-20T00:35:11.331685588Z" level=info msg="CreateContainer within sandbox \"e2eea036139292a5ea41342244d90f77a1bd2e18d3752d4880e4839558f53452\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"142c20572307bc7e2b2b9f1e05f8b5c9c0cb6baaeb8b48e20c0c3df42d49bdb1\"" Jan 20 00:35:11.334971 containerd[1477]: time="2026-01-20T00:35:11.334913065Z" level=info msg="StartContainer for \"142c20572307bc7e2b2b9f1e05f8b5c9c0cb6baaeb8b48e20c0c3df42d49bdb1\"" Jan 20 00:35:11.377135 systemd[1]: Started cri-containerd-64beea474b602cefb84379c994040474189c54a5035a91d1eaba3777a0ed1f35.scope - libcontainer container 64beea474b602cefb84379c994040474189c54a5035a91d1eaba3777a0ed1f35. Jan 20 00:35:11.446473 systemd[1]: Started cri-containerd-142c20572307bc7e2b2b9f1e05f8b5c9c0cb6baaeb8b48e20c0c3df42d49bdb1.scope - libcontainer container 142c20572307bc7e2b2b9f1e05f8b5c9c0cb6baaeb8b48e20c0c3df42d49bdb1. Jan 20 00:35:11.449111 systemd[1]: Started cri-containerd-799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69.scope - libcontainer container 799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69. Jan 20 00:35:11.479083 kubelet[2208]: E0120 00:35:11.479020 2208 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" Jan 20 00:35:11.541705 containerd[1477]: time="2026-01-20T00:35:11.541284628Z" level=info msg="StartContainer for \"64beea474b602cefb84379c994040474189c54a5035a91d1eaba3777a0ed1f35\" returns successfully" Jan 20 00:35:11.572042 containerd[1477]: time="2026-01-20T00:35:11.571872126Z" level=info msg="StartContainer for \"142c20572307bc7e2b2b9f1e05f8b5c9c0cb6baaeb8b48e20c0c3df42d49bdb1\" returns successfully" Jan 20 00:35:11.581616 containerd[1477]: time="2026-01-20T00:35:11.580142760Z" level=info msg="StartContainer for \"799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69\" returns successfully" Jan 20 00:35:12.545205 kubelet[2208]: E0120 00:35:12.544851 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:12.545205 kubelet[2208]: E0120 00:35:12.545130 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:12.550587 kubelet[2208]: E0120 00:35:12.548068 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:12.550587 kubelet[2208]: E0120 00:35:12.548265 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 
20 00:35:12.550587 kubelet[2208]: E0120 00:35:12.550267 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:12.550587 kubelet[2208]: E0120 00:35:12.550373 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:12.844964 kubelet[2208]: I0120 00:35:12.843270 2208 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:35:13.577923 kubelet[2208]: E0120 00:35:13.577028 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:13.577923 kubelet[2208]: E0120 00:35:13.577413 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:13.577923 kubelet[2208]: E0120 00:35:13.578303 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:13.581159 kubelet[2208]: E0120 00:35:13.578801 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:13.589404 kubelet[2208]: E0120 00:35:13.589336 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:13.589642 kubelet[2208]: E0120 00:35:13.589604 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:14.485658 kubelet[2208]: E0120 00:35:14.484041 2208 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 00:35:14.491331 kubelet[2208]: I0120 00:35:14.490927 2208 apiserver.go:52] "Watching apiserver" Jan 20 00:35:14.553280 kubelet[2208]: I0120 00:35:14.553087 2208 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:35:14.577525 kubelet[2208]: E0120 00:35:14.577141 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:14.577525 kubelet[2208]: E0120 00:35:14.577364 2208 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:35:14.577525 kubelet[2208]: E0120 00:35:14.577409 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:14.577818 kubelet[2208]: E0120 00:35:14.577672 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:14.591305 kubelet[2208]: I0120 00:35:14.591206 2208 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:35:14.655455 kubelet[2208]: I0120 00:35:14.655241 2208 kubelet.go:3194] "Creating a mirror 
pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:14.688875 kubelet[2208]: E0120 00:35:14.688440 2208 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:14.688875 kubelet[2208]: I0120 00:35:14.688498 2208 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:35:14.692372 kubelet[2208]: E0120 00:35:14.692091 2208 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 00:35:14.692372 kubelet[2208]: I0120 00:35:14.692125 2208 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:35:14.694226 kubelet[2208]: E0120 00:35:14.694087 2208 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 00:35:16.814781 kubelet[2208]: I0120 00:35:16.814307 2208 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:35:16.826264 kubelet[2208]: E0120 00:35:16.826149 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:17.419086 systemd[1]: Reloading requested from client PID 2486 ('systemctl') (unit session-7.scope)... Jan 20 00:35:17.419137 systemd[1]: Reloading... Jan 20 00:35:17.568670 zram_generator::config[2526]: No configuration found. Jan 20 00:35:17.586156 kubelet[2208]: E0120 00:35:17.585951 2208 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:17.784913 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:35:18.415934 systemd[1]: Reloading finished in 996 ms. Jan 20 00:35:18.551775 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:35:18.587504 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:35:18.588106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:35:18.588321 systemd[1]: kubelet.service: Consumed 2.798s CPU time, 135.0M memory peak, 0B memory swap peak. Jan 20 00:35:18.604152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:35:18.852149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:35:18.880484 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:35:18.966462 kubelet[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:35:18.966462 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 20 00:35:18.966462 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 00:35:18.967425 kubelet[2570]: I0120 00:35:18.966718 2570 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:35:18.979418 kubelet[2570]: I0120 00:35:18.979288 2570 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 00:35:18.979418 kubelet[2570]: I0120 00:35:18.979315 2570 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:35:18.980455 kubelet[2570]: I0120 00:35:18.980365 2570 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 00:35:18.982113 kubelet[2570]: I0120 00:35:18.982041 2570 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 00:35:18.986420 kubelet[2570]: I0120 00:35:18.986380 2570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:35:18.995616 kubelet[2570]: E0120 00:35:18.993045 2570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:35:18.995616 kubelet[2570]: I0120 00:35:18.993110 2570 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 00:35:19.001280 kubelet[2570]: I0120 00:35:19.001212 2570 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 00:35:19.001883 kubelet[2570]: I0120 00:35:19.001849 2570 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:35:19.002472 kubelet[2570]: I0120 00:35:19.001948 2570 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:35:19.002734 kubelet[2570]: I0120 00:35:19.002717 2570 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:35:19.002793 kubelet[2570]: I0120 00:35:19.002783 2570 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 00:35:19.002890 kubelet[2570]: I0120 00:35:19.002879 2570 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:35:19.003150 kubelet[2570]: I0120 00:35:19.003134 2570 kubelet.go:446] "Attempting to sync node with API server" Jan 20 00:35:19.003310 kubelet[2570]: I0120 00:35:19.003293 2570 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:35:19.003412 kubelet[2570]: I0120 00:35:19.003397 2570 kubelet.go:352] "Adding apiserver pod source" Jan 20 00:35:19.003501 kubelet[2570]: I0120 00:35:19.003485 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:35:19.005523 kubelet[2570]: I0120 00:35:19.005492 2570 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:35:19.006754 kubelet[2570]: I0120 00:35:19.006718 2570 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 00:35:19.011033 kubelet[2570]: I0120 00:35:19.010976 2570 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 00:35:19.011119 kubelet[2570]: I0120 00:35:19.011048 2570 server.go:1287] "Started kubelet" Jan 20 00:35:19.014271 sudo[2586]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 20 00:35:19.015004 sudo[2586]: pam_unix(sudo:session): session opened for user 
root(uid=0) by core(uid=0) Jan 20 00:35:19.019525 kubelet[2570]: I0120 00:35:19.018447 2570 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:35:19.019525 kubelet[2570]: I0120 00:35:19.019322 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:35:19.020929 kubelet[2570]: I0120 00:35:19.020479 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:35:19.021153 kubelet[2570]: I0120 00:35:19.021087 2570 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:35:19.027778 kubelet[2570]: I0120 00:35:19.024420 2570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:35:19.027778 kubelet[2570]: I0120 00:35:19.027210 2570 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 00:35:19.027778 kubelet[2570]: I0120 00:35:19.027473 2570 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 00:35:19.030530 kubelet[2570]: I0120 00:35:19.030504 2570 server.go:479] "Adding debug handlers to kubelet server" Jan 20 00:35:19.034776 kubelet[2570]: I0120 00:35:19.034754 2570 factory.go:221] Registration of the systemd container factory successfully Jan 20 00:35:19.035333 kubelet[2570]: I0120 00:35:19.035272 2570 reconciler.go:26] "Reconciler: start to sync state" Jan 20 00:35:19.035725 kubelet[2570]: I0120 00:35:19.035697 2570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:35:19.037502 kubelet[2570]: E0120 00:35:19.037436 2570 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:35:19.039335 kubelet[2570]: I0120 00:35:19.039313 2570 factory.go:221] Registration of the containerd container factory successfully Jan 20 00:35:19.061621 kubelet[2570]: I0120 00:35:19.061067 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 00:35:19.069004 kubelet[2570]: I0120 00:35:19.068935 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 00:35:19.069137 kubelet[2570]: I0120 00:35:19.069025 2570 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 00:35:19.069137 kubelet[2570]: I0120 00:35:19.069050 2570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
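Unlike PID 2104's attempt earlier, this kubelet instance (PID 2570) finds a bootstrapped client certificate (kubelet-client-current.pem, loaded above) and starts without error. Its liveness can be verified on the local health endpoint, assuming the default healthz port of 10248:

  systemctl is-active kubelet
  # The kubelet healthz endpoint listens on 127.0.0.1:10248 by default
  curl -s http://127.0.0.1:10248/healthz; echo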
Jan 20 00:35:19.069137 kubelet[2570]: I0120 00:35:19.069063 2570 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 00:35:19.069269 kubelet[2570]: E0120 00:35:19.069129 2570 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:35:19.107727 kubelet[2570]: I0120 00:35:19.107608 2570 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:35:19.107727 kubelet[2570]: I0120 00:35:19.107654 2570 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:35:19.107727 kubelet[2570]: I0120 00:35:19.107691 2570 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:35:19.108613 kubelet[2570]: I0120 00:35:19.107878 2570 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 00:35:19.108613 kubelet[2570]: I0120 00:35:19.107903 2570 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 00:35:19.108613 kubelet[2570]: I0120 00:35:19.107929 2570 policy_none.go:49] "None policy: Start" Jan 20 00:35:19.108613 kubelet[2570]: I0120 00:35:19.107941 2570 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 00:35:19.108613 kubelet[2570]: I0120 00:35:19.107990 2570 state_mem.go:35] "Initializing new in-memory state store" Jan 20 00:35:19.108613 kubelet[2570]: I0120 00:35:19.108198 2570 state_mem.go:75] "Updated machine memory state" Jan 20 00:35:19.115238 kubelet[2570]: I0120 00:35:19.115211 2570 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 00:35:19.116239 kubelet[2570]: I0120 00:35:19.115787 2570 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:35:19.116239 kubelet[2570]: I0120 00:35:19.115802 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:35:19.116239 kubelet[2570]: I0120 00:35:19.116073 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:35:19.117391 kubelet[2570]: E0120 00:35:19.117308 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 00:35:19.170695 kubelet[2570]: I0120 00:35:19.170643 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:19.171579 kubelet[2570]: I0120 00:35:19.171424 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:35:19.171827 kubelet[2570]: I0120 00:35:19.171097 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:35:19.376850 kubelet[2570]: I0120 00:35:19.373500 2570 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:35:19.385875 kubelet[2570]: E0120 00:35:19.385784 2570 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 00:35:19.391674 kubelet[2570]: I0120 00:35:19.391626 2570 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 00:35:19.392154 kubelet[2570]: I0120 00:35:19.391945 2570 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:35:19.464283 kubelet[2570]: I0120 00:35:19.463788 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:19.465802 kubelet[2570]: I0120 00:35:19.465718 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:19.466529 kubelet[2570]: I0120 00:35:19.466302 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:35:19.466529 kubelet[2570]: I0120 00:35:19.466444 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/00a0ec2cb9d424ca842d67babb2337c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"00a0ec2cb9d424ca842d67babb2337c8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:35:19.467228 kubelet[2570]: I0120 00:35:19.466944 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/00a0ec2cb9d424ca842d67babb2337c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"00a0ec2cb9d424ca842d67babb2337c8\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:35:19.467442 kubelet[2570]: I0120 00:35:19.467121 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/00a0ec2cb9d424ca842d67babb2337c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"00a0ec2cb9d424ca842d67babb2337c8\") " pod="kube-system/kube-apiserver-localhost"
Jan 20 00:35:19.467949 kubelet[2570]: I0120 00:35:19.467806 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:19.468229 kubelet[2570]: I0120 00:35:19.468079 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:19.468229 kubelet[2570]: I0120 00:35:19.468113 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:35:19.681942 kubelet[2570]: E0120 00:35:19.681699 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:19.687246 kubelet[2570]: E0120 00:35:19.687116 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:19.706333 kubelet[2570]: E0120 00:35:19.706263 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:20.006086 kubelet[2570]: I0120 00:35:20.004142 2570 apiserver.go:52] "Watching apiserver" Jan 20 00:35:20.041502 sudo[2586]: pam_unix(sudo:session): session closed for user root Jan 20 00:35:20.112282 kubelet[2570]: E0120 00:35:20.112123 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:20.149949 kubelet[2570]: E0120 00:35:20.113153 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:20.149949 kubelet[2570]: E0120 00:35:20.113480 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:20.212326 kubelet[2570]: I0120 00:35:20.211954 2570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 00:35:20.287154 kubelet[2570]: I0120 00:35:20.285819 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.285787918 podStartE2EDuration="1.285787918s" podCreationTimestamp="2026-01-20 00:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:35:20.271673773 +0000 UTC m=+1.382144111" watchObservedRunningTime="2026-01-20 00:35:20.285787918 +0000 UTC m=+1.396258256"
Jan 20 00:35:20.287154 kubelet[2570]: I0120 00:35:20.285967 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.285959828 podStartE2EDuration="4.285959828s" podCreationTimestamp="2026-01-20 00:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:35:20.285467691 +0000 UTC m=+1.395938109" watchObservedRunningTime="2026-01-20 00:35:20.285959828 +0000 UTC m=+1.396430166" Jan 20 00:35:21.116806 kubelet[2570]: E0120 00:35:21.116740 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:21.116806 kubelet[2570]: E0120 00:35:21.116891 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:22.429417 kubelet[2570]: I0120 00:35:22.429333 2570 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:35:22.430702 kubelet[2570]: I0120 00:35:22.430327 2570 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:35:22.430778 containerd[1477]: time="2026-01-20T00:35:22.430017853Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 00:35:22.711979 sudo[1656]: pam_unix(sudo:session): session closed for user root Jan 20 00:35:22.714994 sshd[1653]: pam_unix(sshd:session): session closed for user core Jan 20 00:35:22.718961 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:34148.service: Deactivated successfully. Jan 20 00:35:22.721383 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 00:35:22.721694 systemd[1]: session-7.scope: Consumed 14.708s CPU time, 163.7M memory peak, 0B memory swap peak. Jan 20 00:35:22.723125 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:35:22.724960 systemd-logind[1462]: Removed session 7. Jan 20 00:35:23.339242 kubelet[2570]: I0120 00:35:23.339048 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.339017194 podStartE2EDuration="4.339017194s" podCreationTimestamp="2026-01-20 00:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:35:20.301325507 +0000 UTC m=+1.411795865" watchObservedRunningTime="2026-01-20 00:35:23.339017194 +0000 UTC m=+4.449487542" Jan 20 00:35:23.368900 systemd[1]: Created slice kubepods-burstable-pod1b47e32c_9040_4e71_939f_6287ca4dcb3e.slice - libcontainer container kubepods-burstable-pod1b47e32c_9040_4e71_939f_6287ca4dcb3e.slice. Jan 20 00:35:23.380134 systemd[1]: Created slice kubepods-besteffort-pod6d055e05_f30c_41fb_8327_531de225cfcc.slice - libcontainer container kubepods-besteffort-pod6d055e05_f30c_41fb_8327_531de225cfcc.slice.
Jan 20 00:35:23.444801 kubelet[2570]: I0120 00:35:23.444660 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-bpf-maps\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.444801 kubelet[2570]: I0120 00:35:23.444727 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d055e05-f30c-41fb-8327-531de225cfcc-kube-proxy\") pod \"kube-proxy-jk7w9\" (UID: \"6d055e05-f30c-41fb-8327-531de225cfcc\") " pod="kube-system/kube-proxy-jk7w9" Jan 20 00:35:23.444801 kubelet[2570]: I0120 00:35:23.444748 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-config-path\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.444801 kubelet[2570]: I0120 00:35:23.444764 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-host-proc-sys-net\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.444801 kubelet[2570]: I0120 00:35:23.444776 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-cgroup\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.444801 kubelet[2570]: I0120 00:35:23.444789 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-lib-modules\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.446445 kubelet[2570]: I0120 00:35:23.444867 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d055e05-f30c-41fb-8327-531de225cfcc-xtables-lock\") pod \"kube-proxy-jk7w9\" (UID: \"6d055e05-f30c-41fb-8327-531de225cfcc\") " pod="kube-system/kube-proxy-jk7w9" Jan 20 00:35:23.446445 kubelet[2570]: I0120 00:35:23.444881 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-run\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.446445 kubelet[2570]: I0120 00:35:23.444896 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-hostproc\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.446445 kubelet[2570]: I0120 00:35:23.444909 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-etc-cni-netd\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d"
Jan 20 00:35:23.446445 kubelet[2570]: I0120 00:35:23.444922 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-xtables-lock\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.446445 kubelet[2570]: I0120 00:35:23.444936 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b47e32c-9040-4e71-939f-6287ca4dcb3e-hubble-tls\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.446795 kubelet[2570]: I0120 00:35:23.444951 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwfbl\" (UniqueName: \"kubernetes.io/projected/1b47e32c-9040-4e71-939f-6287ca4dcb3e-kube-api-access-mwfbl\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.446795 kubelet[2570]: I0120 00:35:23.444964 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d055e05-f30c-41fb-8327-531de225cfcc-lib-modules\") pod \"kube-proxy-jk7w9\" (UID: \"6d055e05-f30c-41fb-8327-531de225cfcc\") " pod="kube-system/kube-proxy-jk7w9" Jan 20 00:35:23.446795 kubelet[2570]: I0120 00:35:23.444981 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-host-proc-sys-kernel\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.446795 kubelet[2570]: I0120 00:35:23.444994 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk74g\" (UniqueName: \"kubernetes.io/projected/6d055e05-f30c-41fb-8327-531de225cfcc-kube-api-access-rk74g\") pod \"kube-proxy-jk7w9\" (UID: \"6d055e05-f30c-41fb-8327-531de225cfcc\") " pod="kube-system/kube-proxy-jk7w9" Jan 20 00:35:23.446795 kubelet[2570]: I0120 00:35:23.445009 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cni-path\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.446991 kubelet[2570]: I0120 00:35:23.445033 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b47e32c-9040-4e71-939f-6287ca4dcb3e-clustermesh-secrets\") pod \"cilium-kgz5d\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") " pod="kube-system/cilium-kgz5d" Jan 20 00:35:23.526808 systemd[1]: Created slice kubepods-besteffort-podc7f028f7_e3ff_49bc_adb6_fd1f6e595003.slice - libcontainer container kubepods-besteffort-podc7f028f7_e3ff_49bc_adb6_fd1f6e595003.slice.
Jan 20 00:35:23.646269 kubelet[2570]: I0120 00:35:23.646142 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pznrn\" (UniqueName: \"kubernetes.io/projected/c7f028f7-e3ff-49bc-adb6-fd1f6e595003-kube-api-access-pznrn\") pod \"cilium-operator-6c4d7847fc-bmx87\" (UID: \"c7f028f7-e3ff-49bc-adb6-fd1f6e595003\") " pod="kube-system/cilium-operator-6c4d7847fc-bmx87" Jan 20 00:35:23.646269 kubelet[2570]: I0120 00:35:23.646253 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7f028f7-e3ff-49bc-adb6-fd1f6e595003-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bmx87\" (UID: \"c7f028f7-e3ff-49bc-adb6-fd1f6e595003\") " pod="kube-system/cilium-operator-6c4d7847fc-bmx87" Jan 20 00:35:23.675324 kubelet[2570]: E0120 00:35:23.675211 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:23.677016 containerd[1477]: time="2026-01-20T00:35:23.676899983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgz5d,Uid:1b47e32c-9040-4e71-939f-6287ca4dcb3e,Namespace:kube-system,Attempt:0,}" Jan 20 00:35:23.690845 kubelet[2570]: E0120 00:35:23.690736 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:23.691402 containerd[1477]: time="2026-01-20T00:35:23.691302061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jk7w9,Uid:6d055e05-f30c-41fb-8327-531de225cfcc,Namespace:kube-system,Attempt:0,}" Jan 20 00:35:23.725644 containerd[1477]: time="2026-01-20T00:35:23.725307702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:35:23.726687 containerd[1477]: time="2026-01-20T00:35:23.726595315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:35:23.726687 containerd[1477]: time="2026-01-20T00:35:23.726638444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:23.727228 containerd[1477]: time="2026-01-20T00:35:23.727027921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:23.752783 systemd[1]: Started cri-containerd-4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d.scope - libcontainer container 4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d. Jan 20 00:35:23.762440 containerd[1477]: time="2026-01-20T00:35:23.760981503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:35:23.762440 containerd[1477]: time="2026-01-20T00:35:23.761085106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:35:23.762440 containerd[1477]: time="2026-01-20T00:35:23.761106927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:35:23.762440 containerd[1477]: time="2026-01-20T00:35:23.761358727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:23.808983 systemd[1]: Started cri-containerd-63bac3132ef4c18ee14e646f1cb2e4323621d532538b32d1595ec97ea6605f96.scope - libcontainer container 63bac3132ef4c18ee14e646f1cb2e4323621d532538b32d1595ec97ea6605f96. Jan 20 00:35:23.826306 containerd[1477]: time="2026-01-20T00:35:23.826102998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgz5d,Uid:1b47e32c-9040-4e71-939f-6287ca4dcb3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\"" Jan 20 00:35:23.827967 kubelet[2570]: E0120 00:35:23.827795 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:23.831360 kubelet[2570]: E0120 00:35:23.831277 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:23.831644 containerd[1477]: time="2026-01-20T00:35:23.831523325Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 00:35:23.833639 containerd[1477]: time="2026-01-20T00:35:23.833454617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bmx87,Uid:c7f028f7-e3ff-49bc-adb6-fd1f6e595003,Namespace:kube-system,Attempt:0,}" Jan 20 00:35:23.870849 containerd[1477]: time="2026-01-20T00:35:23.870708092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jk7w9,Uid:6d055e05-f30c-41fb-8327-531de225cfcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"63bac3132ef4c18ee14e646f1cb2e4323621d532538b32d1595ec97ea6605f96\"" Jan 20 00:35:23.873035 kubelet[2570]: E0120 00:35:23.872698 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:23.877665 containerd[1477]: time="2026-01-20T00:35:23.877476147Z" level=info msg="CreateContainer within sandbox \"63bac3132ef4c18ee14e646f1cb2e4323621d532538b32d1595ec97ea6605f96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:35:23.891647 containerd[1477]: time="2026-01-20T00:35:23.891395384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:35:23.891837 containerd[1477]: time="2026-01-20T00:35:23.891598071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:35:23.891837 containerd[1477]: time="2026-01-20T00:35:23.891645330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:23.895388 containerd[1477]: time="2026-01-20T00:35:23.895301738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 00:35:23.943927 containerd[1477]: time="2026-01-20T00:35:23.942719797Z" level=info msg="CreateContainer within sandbox \"63bac3132ef4c18ee14e646f1cb2e4323621d532538b32d1595ec97ea6605f96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a1298577608342cd433bb09fa674063012bb3f454a951d03b7184a7ee0f5bdff\"" Jan 20 00:35:23.946531 containerd[1477]: time="2026-01-20T00:35:23.946167978Z" level=info msg="StartContainer for \"a1298577608342cd433bb09fa674063012bb3f454a951d03b7184a7ee0f5bdff\"" Jan 20 00:35:23.968735 systemd[1]: Started cri-containerd-d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d.scope - libcontainer container d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d. Jan 20 00:35:24.032920 systemd[1]: Started cri-containerd-a1298577608342cd433bb09fa674063012bb3f454a951d03b7184a7ee0f5bdff.scope - libcontainer container a1298577608342cd433bb09fa674063012bb3f454a951d03b7184a7ee0f5bdff. Jan 20 00:35:24.053906 containerd[1477]: time="2026-01-20T00:35:24.052856633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bmx87,Uid:c7f028f7-e3ff-49bc-adb6-fd1f6e595003,Namespace:kube-system,Attempt:0,} returns sandbox id \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\"" Jan 20 00:35:24.055735 kubelet[2570]: E0120 00:35:24.055523 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:24.103590 containerd[1477]: time="2026-01-20T00:35:24.103419786Z" level=info msg="StartContainer for \"a1298577608342cd433bb09fa674063012bb3f454a951d03b7184a7ee0f5bdff\" returns successfully" Jan 20 00:35:24.198504 kubelet[2570]: E0120 00:35:24.197352 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:24.219110 kubelet[2570]: I0120 00:35:24.218973 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jk7w9" podStartSLOduration=1.218947085 podStartE2EDuration="1.218947085s" podCreationTimestamp="2026-01-20 00:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:35:24.217929223 +0000 UTC m=+5.328399561" watchObservedRunningTime="2026-01-20 00:35:24.218947085 +0000 UTC m=+5.329417424" Jan 20 00:35:25.909716 kubelet[2570]: E0120 00:35:25.909109 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:25.934459 kubelet[2570]: E0120 00:35:25.934321 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:26.227095 kubelet[2570]: E0120 00:35:26.226762 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:26.227095 kubelet[2570]: E0120 00:35:26.226946 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:27.229854 kubelet[2570]: E0120 00:35:27.229698 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:35:27.233122 kubelet[2570]: E0120 00:35:27.230370 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:27.610230 kubelet[2570]: E0120 00:35:27.610025 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:28.232091 kubelet[2570]: E0120 00:35:28.231986 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:36.022701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1594358718.mount: Deactivated successfully. Jan 20 00:35:39.533434 containerd[1477]: time="2026-01-20T00:35:39.532971574Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:35:39.534524 containerd[1477]: time="2026-01-20T00:35:39.534438510Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 00:35:39.536039 containerd[1477]: time="2026-01-20T00:35:39.535963857Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:35:39.537772 containerd[1477]: time="2026-01-20T00:35:39.537707738Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.706088515s" Jan 20 00:35:39.537772 containerd[1477]: time="2026-01-20T00:35:39.537752711Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 00:35:39.539408 containerd[1477]: time="2026-01-20T00:35:39.539188741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 00:35:39.543239 containerd[1477]: time="2026-01-20T00:35:39.543131028Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 00:35:39.565107 containerd[1477]: time="2026-01-20T00:35:39.565007013Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b\"" Jan 20 00:35:39.566059 containerd[1477]: time="2026-01-20T00:35:39.566023480Z" level=info msg="StartContainer for \"7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b\""
Jan 20 00:35:39.640913 systemd[1]: Started cri-containerd-7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b.scope - libcontainer container 7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b. Jan 20 00:35:39.694284 containerd[1477]: time="2026-01-20T00:35:39.694110181Z" level=info msg="StartContainer for \"7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b\" returns successfully" Jan 20 00:35:39.711116 systemd[1]: cri-containerd-7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b.scope: Deactivated successfully. Jan 20 00:35:39.750186 kubelet[2570]: E0120 00:35:39.750126 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:39.911451 containerd[1477]: time="2026-01-20T00:35:39.911279723Z" level=info msg="shim disconnected" id=7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b namespace=k8s.io Jan 20 00:35:39.911789 containerd[1477]: time="2026-01-20T00:35:39.911461721Z" level=warning msg="cleaning up after shim disconnected" id=7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b namespace=k8s.io Jan 20 00:35:39.911789 containerd[1477]: time="2026-01-20T00:35:39.911481138Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:35:40.559008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b-rootfs.mount: Deactivated successfully. Jan 20 00:35:40.764304 kubelet[2570]: E0120 00:35:40.763880 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:40.768085 containerd[1477]: time="2026-01-20T00:35:40.767983779Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 00:35:40.832180 containerd[1477]: time="2026-01-20T00:35:40.831935730Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003\"" Jan 20 00:35:40.833117 containerd[1477]: time="2026-01-20T00:35:40.833046534Z" level=info msg="StartContainer for \"e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003\"" Jan 20 00:35:40.893977 systemd[1]: Started cri-containerd-e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003.scope - libcontainer container e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003. Jan 20 00:35:40.933573 containerd[1477]: time="2026-01-20T00:35:40.933461477Z" level=info msg="StartContainer for \"e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003\" returns successfully" Jan 20 00:35:40.965232 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:35:40.966174 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:35:40.966411 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:35:40.973076 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 00:35:40.973608 systemd[1]: cri-containerd-e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003.scope: Deactivated successfully. Jan 20 00:35:41.036182 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:35:41.049496 containerd[1477]: time="2026-01-20T00:35:41.049141861Z" level=info msg="shim disconnected" id=e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003 namespace=k8s.io Jan 20 00:35:41.049496 containerd[1477]: time="2026-01-20T00:35:41.049253769Z" level=warning msg="cleaning up after shim disconnected" id=e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003 namespace=k8s.io Jan 20 00:35:41.049496 containerd[1477]: time="2026-01-20T00:35:41.049274598Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:35:41.576617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003-rootfs.mount: Deactivated successfully. Jan 20 00:35:41.835810 kubelet[2570]: E0120 00:35:41.828477 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:41.886493 containerd[1477]: time="2026-01-20T00:35:41.886158478Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 00:35:41.924794 containerd[1477]: time="2026-01-20T00:35:41.924678027Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f\"" Jan 20 00:35:41.948789 containerd[1477]: time="2026-01-20T00:35:41.948432216Z" level=info msg="StartContainer for \"98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f\"" Jan 20 00:35:42.043141 systemd[1]: Started cri-containerd-98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f.scope - libcontainer container 98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f. Jan 20 00:35:42.117100 containerd[1477]: time="2026-01-20T00:35:42.116265832Z" level=info msg="StartContainer for \"98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f\" returns successfully" Jan 20 00:35:42.121416 systemd[1]: cri-containerd-98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f.scope: Deactivated successfully. 
Jan 20 00:35:42.151500 containerd[1477]: time="2026-01-20T00:35:42.150114936Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:35:42.151500 containerd[1477]: time="2026-01-20T00:35:42.151406898Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 20 00:35:42.153330 containerd[1477]: time="2026-01-20T00:35:42.153098121Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:35:42.157100 containerd[1477]: time="2026-01-20T00:35:42.156923974Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.617658449s" Jan 20 00:35:42.157100 containerd[1477]: time="2026-01-20T00:35:42.156973046Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 20 00:35:42.179270 containerd[1477]: time="2026-01-20T00:35:42.172187120Z" level=info msg="CreateContainer within sandbox \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 20 00:35:42.234725 containerd[1477]: time="2026-01-20T00:35:42.233802336Z" level=info msg="shim disconnected" id=98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f namespace=k8s.io Jan 20 00:35:42.234725 containerd[1477]: time="2026-01-20T00:35:42.234152519Z" level=warning msg="cleaning up after shim disconnected" id=98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f namespace=k8s.io Jan 20 00:35:42.234725 containerd[1477]: time="2026-01-20T00:35:42.234171143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:35:42.398510 containerd[1477]: time="2026-01-20T00:35:42.398174250Z" level=info msg="CreateContainer within sandbox \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f\"" Jan 20 00:35:42.407289 containerd[1477]: time="2026-01-20T00:35:42.407115753Z" level=info msg="StartContainer for \"0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f\"" Jan 20 00:35:42.461124 systemd[1]: Started cri-containerd-0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f.scope - libcontainer container 0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f. 
Jan 20 00:35:42.526582 containerd[1477]: time="2026-01-20T00:35:42.526360390Z" level=info msg="StartContainer for \"0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f\" returns successfully" Jan 20 00:35:42.563775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f-rootfs.mount: Deactivated successfully. Jan 20 00:35:42.826957 kubelet[2570]: E0120 00:35:42.826384 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:42.834255 kubelet[2570]: E0120 00:35:42.834107 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:42.840345 containerd[1477]: time="2026-01-20T00:35:42.840274262Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 00:35:42.853672 kubelet[2570]: I0120 00:35:42.853499 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bmx87" podStartSLOduration=1.751492686 podStartE2EDuration="19.853479022s" podCreationTimestamp="2026-01-20 00:35:23 +0000 UTC" firstStartedPulling="2026-01-20 00:35:24.056937103 +0000 UTC m=+5.167407441" lastFinishedPulling="2026-01-20 00:35:42.158923429 +0000 UTC m=+23.269393777" observedRunningTime="2026-01-20 00:35:42.852844968 +0000 UTC m=+23.963315336" watchObservedRunningTime="2026-01-20 00:35:42.853479022 +0000 UTC m=+23.963949370" Jan 20 00:35:42.897505 containerd[1477]: time="2026-01-20T00:35:42.897116996Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d\"" Jan 20 00:35:42.900408 containerd[1477]: time="2026-01-20T00:35:42.899787726Z" level=info msg="StartContainer for \"78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d\"" Jan 20 00:35:42.953826 systemd[1]: Started cri-containerd-78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d.scope - libcontainer container 78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d. Jan 20 00:35:43.036469 systemd[1]: cri-containerd-78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d.scope: Deactivated successfully. 
Jan 20 00:35:43.038937 containerd[1477]: time="2026-01-20T00:35:43.038150649Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b47e32c_9040_4e71_939f_6287ca4dcb3e.slice/cri-containerd-78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d.scope/memory.events\": no such file or directory" Jan 20 00:35:43.053342 containerd[1477]: time="2026-01-20T00:35:43.053199624Z" level=info msg="StartContainer for \"78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d\" returns successfully" Jan 20 00:35:43.123968 containerd[1477]: time="2026-01-20T00:35:43.123655951Z" level=info msg="shim disconnected" id=78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d namespace=k8s.io Jan 20 00:35:43.123968 containerd[1477]: time="2026-01-20T00:35:43.123914184Z" level=warning msg="cleaning up after shim disconnected" id=78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d namespace=k8s.io Jan 20 00:35:43.123968 containerd[1477]: time="2026-01-20T00:35:43.123934943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:35:43.561651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d-rootfs.mount: Deactivated successfully. Jan 20 00:35:43.840171 kubelet[2570]: E0120 00:35:43.839965 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:43.840171 kubelet[2570]: E0120 00:35:43.840052 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:43.844178 containerd[1477]: time="2026-01-20T00:35:43.844063380Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 00:35:43.874371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352282460.mount: Deactivated successfully. Jan 20 00:35:43.876527 containerd[1477]: time="2026-01-20T00:35:43.876410731Z" level=info msg="CreateContainer within sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd\"" Jan 20 00:35:43.877757 containerd[1477]: time="2026-01-20T00:35:43.877725094Z" level=info msg="StartContainer for \"26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd\"" Jan 20 00:35:43.933740 systemd[1]: Started cri-containerd-26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd.scope - libcontainer container 26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd. Jan 20 00:35:43.988455 containerd[1477]: time="2026-01-20T00:35:43.988378044Z" level=info msg="StartContainer for \"26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd\" returns successfully" Jan 20 00:35:44.190763 kubelet[2570]: I0120 00:35:44.190638 2570 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 00:35:44.239370 systemd[1]: Created slice kubepods-burstable-pod5bbf5739_56fb_43bc_bfda_0ed9e40f91d8.slice - libcontainer container kubepods-burstable-pod5bbf5739_56fb_43bc_bfda_0ed9e40f91d8.slice. 
Jan 20 00:35:44.251070 systemd[1]: Created slice kubepods-burstable-pod017f9de8_92f6_4da8_a44a_5a2db6f1c312.slice - libcontainer container kubepods-burstable-pod017f9de8_92f6_4da8_a44a_5a2db6f1c312.slice. Jan 20 00:35:44.412238 kubelet[2570]: I0120 00:35:44.411667 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/017f9de8-92f6-4da8-a44a-5a2db6f1c312-config-volume\") pod \"coredns-668d6bf9bc-ptkdg\" (UID: \"017f9de8-92f6-4da8-a44a-5a2db6f1c312\") " pod="kube-system/coredns-668d6bf9bc-ptkdg" Jan 20 00:35:44.412238 kubelet[2570]: I0120 00:35:44.411985 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bbf5739-56fb-43bc-bfda-0ed9e40f91d8-config-volume\") pod \"coredns-668d6bf9bc-t6lvh\" (UID: \"5bbf5739-56fb-43bc-bfda-0ed9e40f91d8\") " pod="kube-system/coredns-668d6bf9bc-t6lvh" Jan 20 00:35:44.412238 kubelet[2570]: I0120 00:35:44.412071 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw776\" (UniqueName: \"kubernetes.io/projected/5bbf5739-56fb-43bc-bfda-0ed9e40f91d8-kube-api-access-dw776\") pod \"coredns-668d6bf9bc-t6lvh\" (UID: \"5bbf5739-56fb-43bc-bfda-0ed9e40f91d8\") " pod="kube-system/coredns-668d6bf9bc-t6lvh" Jan 20 00:35:44.412238 kubelet[2570]: I0120 00:35:44.412145 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dwn\" (UniqueName: \"kubernetes.io/projected/017f9de8-92f6-4da8-a44a-5a2db6f1c312-kube-api-access-v2dwn\") pod \"coredns-668d6bf9bc-ptkdg\" (UID: \"017f9de8-92f6-4da8-a44a-5a2db6f1c312\") " pod="kube-system/coredns-668d6bf9bc-ptkdg" Jan 20 00:35:44.568754 systemd[1]: run-containerd-runc-k8s.io-26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd-runc.BH5J7p.mount: Deactivated successfully. 
Jan 20 00:35:44.848072 kubelet[2570]: E0120 00:35:44.847895 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:44.848846 containerd[1477]: time="2026-01-20T00:35:44.848801915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t6lvh,Uid:5bbf5739-56fb-43bc-bfda-0ed9e40f91d8,Namespace:kube-system,Attempt:0,}" Jan 20 00:35:44.851037 kubelet[2570]: E0120 00:35:44.850970 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:44.856473 kubelet[2570]: E0120 00:35:44.856239 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:44.858634 containerd[1477]: time="2026-01-20T00:35:44.858413268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ptkdg,Uid:017f9de8-92f6-4da8-a44a-5a2db6f1c312,Namespace:kube-system,Attempt:0,}" Jan 20 00:35:44.880489 kubelet[2570]: I0120 00:35:44.880241 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kgz5d" podStartSLOduration=6.172178628 podStartE2EDuration="21.880185971s" podCreationTimestamp="2026-01-20 00:35:23 +0000 UTC" firstStartedPulling="2026-01-20 00:35:23.830902257 +0000 UTC m=+4.941372595" lastFinishedPulling="2026-01-20 00:35:39.5389096 +0000 UTC m=+20.649379938" observedRunningTime="2026-01-20 00:35:44.879866358 +0000 UTC m=+25.990336706" watchObservedRunningTime="2026-01-20 00:35:44.880185971 +0000 UTC m=+25.990656309" Jan 20 00:35:45.853499 kubelet[2570]: E0120 00:35:45.853390 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:46.614284 systemd-networkd[1395]: cilium_host: Link UP Jan 20 00:35:46.616180 systemd-networkd[1395]: cilium_net: Link UP Jan 20 00:35:46.616689 systemd-networkd[1395]: cilium_net: Gained carrier Jan 20 00:35:46.617122 systemd-networkd[1395]: cilium_host: Gained carrier Jan 20 00:35:46.797437 systemd-networkd[1395]: cilium_vxlan: Link UP Jan 20 00:35:46.797452 systemd-networkd[1395]: cilium_vxlan: Gained carrier Jan 20 00:35:46.858010 kubelet[2570]: E0120 00:35:46.857777 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:46.970859 systemd-networkd[1395]: cilium_host: Gained IPv6LL Jan 20 00:35:47.079610 kernel: NET: Registered PF_ALG protocol family Jan 20 00:35:47.418868 systemd-networkd[1395]: cilium_net: Gained IPv6LL Jan 20 00:35:47.990422 systemd-networkd[1395]: lxc_health: Link UP Jan 20 00:35:48.002526 systemd-networkd[1395]: lxc_health: Gained carrier Jan 20 00:35:48.516774 systemd-networkd[1395]: lxc5e358937b03a: Link UP Jan 20 00:35:48.526508 kernel: eth0: renamed from tmp96f03 Jan 20 00:35:48.541980 systemd-networkd[1395]: lxc1b1dd9064620: Link UP Jan 20 00:35:48.548594 kernel: eth0: renamed from tmpcf0e2 Jan 20 00:35:48.554817 systemd-networkd[1395]: lxc5e358937b03a: Gained carrier Jan 20 00:35:48.555797 systemd-networkd[1395]: lxc1b1dd9064620: Gained carrier Jan 20 00:35:48.634904 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL Jan 20 00:35:49.467172 systemd-networkd[1395]: lxc_health: Gained IPv6LL
Jan 20 00:35:49.686971 kubelet[2570]: E0120 00:35:49.685522 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:49.875115 kubelet[2570]: E0120 00:35:49.874902 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:50.363195 systemd-networkd[1395]: lxc1b1dd9064620: Gained IPv6LL Jan 20 00:35:50.556495 systemd-networkd[1395]: lxc5e358937b03a: Gained IPv6LL Jan 20 00:35:50.894336 kubelet[2570]: E0120 00:35:50.893484 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:55.067082 containerd[1477]: time="2026-01-20T00:35:55.066616017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:35:55.067082 containerd[1477]: time="2026-01-20T00:35:55.066698823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:35:55.067082 containerd[1477]: time="2026-01-20T00:35:55.066714021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:55.067082 containerd[1477]: time="2026-01-20T00:35:55.066836419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:55.109031 containerd[1477]: time="2026-01-20T00:35:55.108781673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:35:55.109031 containerd[1477]: time="2026-01-20T00:35:55.108989142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:35:55.109031 containerd[1477]: time="2026-01-20T00:35:55.109016884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:55.110836 systemd[1]: Started cri-containerd-cf0e24fdd66e7ed2a9fffd3498d7e211b1e5358e43371ea88458e76d82a4d1c0.scope - libcontainer container cf0e24fdd66e7ed2a9fffd3498d7e211b1e5358e43371ea88458e76d82a4d1c0. Jan 20 00:35:55.112074 containerd[1477]: time="2026-01-20T00:35:55.110824775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:35:55.134437 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:35:55.146005 systemd[1]: Started cri-containerd-96f0372740ca0a1c2dd255a56fd982be5e7df7d97a0ec9e7ffaffc0e466e0bb6.scope - libcontainer container 96f0372740ca0a1c2dd255a56fd982be5e7df7d97a0ec9e7ffaffc0e466e0bb6.
Jan 20 00:35:55.179597 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:35:55.193406 containerd[1477]: time="2026-01-20T00:35:55.193328970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t6lvh,Uid:5bbf5739-56fb-43bc-bfda-0ed9e40f91d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf0e24fdd66e7ed2a9fffd3498d7e211b1e5358e43371ea88458e76d82a4d1c0\"" Jan 20 00:35:55.194635 kubelet[2570]: E0120 00:35:55.194426 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:55.197958 containerd[1477]: time="2026-01-20T00:35:55.197883162Z" level=info msg="CreateContainer within sandbox \"cf0e24fdd66e7ed2a9fffd3498d7e211b1e5358e43371ea88458e76d82a4d1c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:35:55.224651 containerd[1477]: time="2026-01-20T00:35:55.224589217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ptkdg,Uid:017f9de8-92f6-4da8-a44a-5a2db6f1c312,Namespace:kube-system,Attempt:0,} returns sandbox id \"96f0372740ca0a1c2dd255a56fd982be5e7df7d97a0ec9e7ffaffc0e466e0bb6\"" Jan 20 00:35:55.225474 kubelet[2570]: E0120 00:35:55.225430 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:55.235136 containerd[1477]: time="2026-01-20T00:35:55.235075989Z" level=info msg="CreateContainer within sandbox \"96f0372740ca0a1c2dd255a56fd982be5e7df7d97a0ec9e7ffaffc0e466e0bb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:35:55.238827 containerd[1477]: time="2026-01-20T00:35:55.238697007Z" level=info msg="CreateContainer within sandbox \"cf0e24fdd66e7ed2a9fffd3498d7e211b1e5358e43371ea88458e76d82a4d1c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"274074c525139f7b0ddb48b9e3fba42c8a2867be4326db41ab2c1b099b0a0d7c\"" Jan 20 00:35:55.239872 containerd[1477]: time="2026-01-20T00:35:55.239832926Z" level=info msg="StartContainer for \"274074c525139f7b0ddb48b9e3fba42c8a2867be4326db41ab2c1b099b0a0d7c\"" Jan 20 00:35:55.287017 containerd[1477]: time="2026-01-20T00:35:55.286866489Z" level=info msg="CreateContainer within sandbox \"96f0372740ca0a1c2dd255a56fd982be5e7df7d97a0ec9e7ffaffc0e466e0bb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77b40c6425ce7c76398257d5fec100da1e6f379c3ad881564b1e491681af9b54\"" Jan 20 00:35:55.291118 containerd[1477]: time="2026-01-20T00:35:55.289529355Z" level=info msg="StartContainer for \"77b40c6425ce7c76398257d5fec100da1e6f379c3ad881564b1e491681af9b54\"" Jan 20 00:35:55.299917 systemd[1]: Started cri-containerd-274074c525139f7b0ddb48b9e3fba42c8a2867be4326db41ab2c1b099b0a0d7c.scope - libcontainer container 274074c525139f7b0ddb48b9e3fba42c8a2867be4326db41ab2c1b099b0a0d7c. Jan 20 00:35:55.337010 systemd[1]: Started cri-containerd-77b40c6425ce7c76398257d5fec100da1e6f379c3ad881564b1e491681af9b54.scope - libcontainer container 77b40c6425ce7c76398257d5fec100da1e6f379c3ad881564b1e491681af9b54. 
Jan 20 00:35:55.357068 containerd[1477]: time="2026-01-20T00:35:55.356954524Z" level=info msg="StartContainer for \"274074c525139f7b0ddb48b9e3fba42c8a2867be4326db41ab2c1b099b0a0d7c\" returns successfully" Jan 20 00:35:55.389883 containerd[1477]: time="2026-01-20T00:35:55.389506948Z" level=info msg="StartContainer for \"77b40c6425ce7c76398257d5fec100da1e6f379c3ad881564b1e491681af9b54\" returns successfully" Jan 20 00:35:55.956241 kubelet[2570]: E0120 00:35:55.955847 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:55.962053 kubelet[2570]: E0120 00:35:55.961954 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:55.979704 kubelet[2570]: I0120 00:35:55.979403 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t6lvh" podStartSLOduration=32.979380896 podStartE2EDuration="32.979380896s" podCreationTimestamp="2026-01-20 00:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:35:55.977982311 +0000 UTC m=+37.088452659" watchObservedRunningTime="2026-01-20 00:35:55.979380896 +0000 UTC m=+37.089851234" Jan 20 00:35:55.999319 kubelet[2570]: I0120 00:35:55.999167 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ptkdg" podStartSLOduration=32.999143611 podStartE2EDuration="32.999143611s" podCreationTimestamp="2026-01-20 00:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:35:55.996474088 +0000 UTC m=+37.106944426" watchObservedRunningTime="2026-01-20 00:35:55.999143611 +0000 UTC m=+37.109613949" Jan 20 00:35:56.075400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446958636.mount: Deactivated successfully. Jan 20 00:35:56.965271 kubelet[2570]: E0120 00:35:56.965203 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:35:57.972150 kubelet[2570]: E0120 00:35:57.971758 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:36:05.965850 kubelet[2570]: E0120 00:36:05.962805 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:36:06.037499 kubelet[2570]: E0120 00:36:06.036690 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:36:09.130026 systemd[1]: Started sshd@7-10.0.0.24:22-10.0.0.1:46182.service - OpenSSH per-connection server daemon (10.0.0.1:46182). 
Jan 20 00:36:09.176950 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 46182 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:09.179630 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:09.187344 systemd-logind[1462]: New session 8 of user core.
Jan 20 00:36:09.196999 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 00:36:09.503192 sshd[3964]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:09.507962 systemd[1]: sshd@7-10.0.0.24:22-10.0.0.1:46182.service: Deactivated successfully.
Jan 20 00:36:09.510059 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 00:36:09.510999 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit.
Jan 20 00:36:09.512269 systemd-logind[1462]: Removed session 8.
Jan 20 00:36:14.516793 systemd[1]: Started sshd@8-10.0.0.24:22-10.0.0.1:53106.service - OpenSSH per-connection server daemon (10.0.0.1:53106).
Jan 20 00:36:14.560575 sshd[3979]: Accepted publickey for core from 10.0.0.1 port 53106 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:14.563151 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:14.570361 systemd-logind[1462]: New session 9 of user core.
Jan 20 00:36:14.578977 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 00:36:14.723153 sshd[3979]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:14.728937 systemd[1]: sshd@8-10.0.0.24:22-10.0.0.1:53106.service: Deactivated successfully.
Jan 20 00:36:14.731932 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 00:36:14.733203 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit.
Jan 20 00:36:14.734981 systemd-logind[1462]: Removed session 9.
Jan 20 00:36:19.749915 systemd[1]: Started sshd@9-10.0.0.24:22-10.0.0.1:53122.service - OpenSSH per-connection server daemon (10.0.0.1:53122).
Jan 20 00:36:19.800294 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 53122 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:19.803027 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:19.810900 systemd-logind[1462]: New session 10 of user core.
Jan 20 00:36:19.825805 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 00:36:19.960139 sshd[3998]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:19.963732 systemd[1]: sshd@9-10.0.0.24:22-10.0.0.1:53122.service: Deactivated successfully.
Jan 20 00:36:19.966721 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 00:36:19.968996 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit.
Jan 20 00:36:19.970466 systemd-logind[1462]: Removed session 10.
Jan 20 00:36:24.972589 systemd[1]: Started sshd@10-10.0.0.24:22-10.0.0.1:36434.service - OpenSSH per-connection server daemon (10.0.0.1:36434).
Jan 20 00:36:25.014038 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 36434 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:25.015785 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:25.020912 systemd-logind[1462]: New session 11 of user core.
Jan 20 00:36:25.031771 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 20 00:36:25.152922 sshd[4015]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:25.164823 systemd[1]: sshd@10-10.0.0.24:22-10.0.0.1:36434.service: Deactivated successfully.
Jan 20 00:36:25.166935 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 00:36:25.168708 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit.
Jan 20 00:36:25.176895 systemd[1]: Started sshd@11-10.0.0.24:22-10.0.0.1:36444.service - OpenSSH per-connection server daemon (10.0.0.1:36444).
Jan 20 00:36:25.177902 systemd-logind[1462]: Removed session 11.
Jan 20 00:36:25.210976 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 36444 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:25.212654 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:25.218159 systemd-logind[1462]: New session 12 of user core.
Jan 20 00:36:25.229868 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 20 00:36:25.396001 sshd[4030]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:25.407690 systemd[1]: sshd@11-10.0.0.24:22-10.0.0.1:36444.service: Deactivated successfully.
Jan 20 00:36:25.410983 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 00:36:25.413691 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit.
Jan 20 00:36:25.425618 systemd[1]: Started sshd@12-10.0.0.24:22-10.0.0.1:36448.service - OpenSSH per-connection server daemon (10.0.0.1:36448).
Jan 20 00:36:25.428103 systemd-logind[1462]: Removed session 12.
Jan 20 00:36:25.462330 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 36448 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:25.464754 sshd[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:25.472291 systemd-logind[1462]: New session 13 of user core.
Jan 20 00:36:25.482942 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 20 00:36:25.605491 sshd[4042]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:25.609335 systemd[1]: sshd@12-10.0.0.24:22-10.0.0.1:36448.service: Deactivated successfully.
Jan 20 00:36:25.611778 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 00:36:25.613652 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit.
Jan 20 00:36:25.615506 systemd-logind[1462]: Removed session 13.
Jan 20 00:36:28.070953 kubelet[2570]: E0120 00:36:28.070827 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:28.474130 update_engine[1465]: I20260120 00:36:28.474004 1465 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 20 00:36:28.474130 update_engine[1465]: I20260120 00:36:28.474083 1465 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 20 00:36:28.474841 update_engine[1465]: I20260120 00:36:28.474385 1465 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 20 00:36:28.475261 update_engine[1465]: I20260120 00:36:28.475206 1465 omaha_request_params.cc:62] Current group set to lts
Jan 20 00:36:28.476249 update_engine[1465]: I20260120 00:36:28.476188 1465 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 20 00:36:28.476249 update_engine[1465]: I20260120 00:36:28.476227 1465 update_attempter.cc:643] Scheduling an action processor start.
Jan 20 00:36:28.476249 update_engine[1465]: I20260120 00:36:28.476248 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 20 00:36:28.476349 update_engine[1465]: I20260120 00:36:28.476336 1465 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 20 00:36:28.476525 update_engine[1465]: I20260120 00:36:28.476434 1465 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 20 00:36:28.476525 update_engine[1465]: I20260120 00:36:28.476496 1465 omaha_request_action.cc:272] Request:
Jan 20 00:36:28.476525 update_engine[1465]:     [Omaha request XML elided; markup not preserved in this capture]
Jan 20 00:36:28.476525 update_engine[1465]: I20260120 00:36:28.476511 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 00:36:28.479391 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 20 00:36:28.480032 update_engine[1465]: I20260120 00:36:28.479972 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 00:36:28.480629 update_engine[1465]: I20260120 00:36:28.480448 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 00:36:28.497414 update_engine[1465]: E20260120 00:36:28.497293 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 00:36:28.497627 update_engine[1465]: I20260120 00:36:28.497421 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 20 00:36:30.619875 systemd[1]: Started sshd@13-10.0.0.24:22-10.0.0.1:36454.service - OpenSSH per-connection server daemon (10.0.0.1:36454).
Jan 20 00:36:30.672096 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 36454 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:30.673785 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:30.678712 systemd-logind[1462]: New session 14 of user core.
Jan 20 00:36:30.690723 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 20 00:36:30.802322 sshd[4057]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:30.806161 systemd[1]: sshd@13-10.0.0.24:22-10.0.0.1:36454.service: Deactivated successfully.
Jan 20 00:36:30.808093 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 00:36:30.808895 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit.
Jan 20 00:36:30.810290 systemd-logind[1462]: Removed session 14.
Jan 20 00:36:31.073740 kubelet[2570]: E0120 00:36:31.073674 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:34.078517 kubelet[2570]: E0120 00:36:34.078161 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:35.823153 systemd[1]: Started sshd@14-10.0.0.24:22-10.0.0.1:60278.service - OpenSSH per-connection server daemon (10.0.0.1:60278).
Jan 20 00:36:35.865169 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 60278 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:35.867344 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:35.872475 systemd-logind[1462]: New session 15 of user core.
Jan 20 00:36:35.887721 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 20 00:36:35.997973 sshd[4071]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:36.008423 systemd[1]: sshd@14-10.0.0.24:22-10.0.0.1:60278.service: Deactivated successfully.
Jan 20 00:36:36.010222 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 00:36:36.011873 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit.
Jan 20 00:36:36.035973 systemd[1]: Started sshd@15-10.0.0.24:22-10.0.0.1:60286.service - OpenSSH per-connection server daemon (10.0.0.1:60286).
Jan 20 00:36:36.037383 systemd-logind[1462]: Removed session 15.
Jan 20 00:36:36.068931 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 60286 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:36.070449 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:36.075116 systemd-logind[1462]: New session 16 of user core.
Jan 20 00:36:36.085694 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 20 00:36:36.426970 sshd[4086]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:36.436060 systemd[1]: sshd@15-10.0.0.24:22-10.0.0.1:60286.service: Deactivated successfully.
Jan 20 00:36:36.438390 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 00:36:36.440332 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit.
Jan 20 00:36:36.448949 systemd[1]: Started sshd@16-10.0.0.24:22-10.0.0.1:60300.service - OpenSSH per-connection server daemon (10.0.0.1:60300).
Jan 20 00:36:36.450276 systemd-logind[1462]: Removed session 16.
Jan 20 00:36:36.492985 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 60300 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:36.494775 sshd[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:36.500923 systemd-logind[1462]: New session 17 of user core.
Jan 20 00:36:36.510773 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 20 00:36:37.150430 sshd[4100]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:37.162752 systemd[1]: sshd@16-10.0.0.24:22-10.0.0.1:60300.service: Deactivated successfully.
Jan 20 00:36:37.179411 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 00:36:37.181157 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit.
Jan 20 00:36:37.192193 systemd[1]: Started sshd@17-10.0.0.24:22-10.0.0.1:60308.service - OpenSSH per-connection server daemon (10.0.0.1:60308).
Jan 20 00:36:37.194171 systemd-logind[1462]: Removed session 17.
Jan 20 00:36:37.234250 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 60308 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:37.236178 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:37.241764 systemd-logind[1462]: New session 18 of user core.
Jan 20 00:36:37.258921 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 20 00:36:38.477740 update_engine[1465]: I20260120 00:36:38.476235 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 00:36:38.477740 update_engine[1465]: I20260120 00:36:38.477640 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 00:36:38.479828 update_engine[1465]: I20260120 00:36:38.478055 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 00:36:38.498797 update_engine[1465]: E20260120 00:36:38.497882 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 00:36:38.498797 update_engine[1465]: I20260120 00:36:38.498895 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 20 00:36:48.781266 update_engine[1465]: I20260120 00:36:48.603219 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 00:36:50.305218 update_engine[1465]: I20260120 00:36:50.299434 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 00:36:50.311967 update_engine[1465]: I20260120 00:36:50.306496 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 00:36:50.328832 update_engine[1465]: E20260120 00:36:50.328527 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 00:36:50.328832 update_engine[1465]: I20260120 00:36:50.328784 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 20 00:36:50.455936 sshd[4121]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:50.492120 systemd[1]: sshd@17-10.0.0.24:22-10.0.0.1:60308.service: Deactivated successfully.
Jan 20 00:36:50.503995 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 00:36:50.506149 kubelet[2570]: E0120 00:36:50.503922 2570 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.078s"
Jan 20 00:36:50.509812 systemd[1]: session-18.scope: Consumed 8.043s CPU time.
Jan 20 00:36:50.511145 systemd[1]: cri-containerd-799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69.scope: Deactivated successfully.
Jan 20 00:36:50.514733 systemd[1]: cri-containerd-799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69.scope: Consumed 5.587s CPU time, 16.2M memory peak, 0B memory swap peak.
Jan 20 00:36:50.527988 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit.
Jan 20 00:36:50.546475 systemd[1]: Started sshd@18-10.0.0.24:22-10.0.0.1:56574.service - OpenSSH per-connection server daemon (10.0.0.1:56574).
Jan 20 00:36:50.552024 systemd-logind[1462]: Removed session 18.
Jan 20 00:36:50.584229 kubelet[2570]: E0120 00:36:50.583867 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:50.650101 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 56574 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:50.656743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69-rootfs.mount: Deactivated successfully.
Jan 20 00:36:50.658815 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:50.678149 systemd-logind[1462]: New session 19 of user core.
Jan 20 00:36:50.686997 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 20 00:36:50.696612 containerd[1477]: time="2026-01-20T00:36:50.696252386Z" level=info msg="shim disconnected" id=799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69 namespace=k8s.io
Jan 20 00:36:50.696612 containerd[1477]: time="2026-01-20T00:36:50.696432432Z" level=warning msg="cleaning up after shim disconnected" id=799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69 namespace=k8s.io
Jan 20 00:36:50.697447 containerd[1477]: time="2026-01-20T00:36:50.696636824Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:36:50.965954 sshd[4134]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:50.974204 systemd[1]: sshd@18-10.0.0.24:22-10.0.0.1:56574.service: Deactivated successfully.
Jan 20 00:36:50.982432 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 00:36:50.988671 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit.
Jan 20 00:36:50.993210 systemd-logind[1462]: Removed session 19.
Jan 20 00:36:51.457669 kubelet[2570]: I0120 00:36:51.454721 2570 scope.go:117] "RemoveContainer" containerID="799114dcf7979c22a09f6be0f51568d22f7f8d12d1fe3d4eb812567cc006dd69"
Jan 20 00:36:51.457669 kubelet[2570]: E0120 00:36:51.454886 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:51.465449 containerd[1477]: time="2026-01-20T00:36:51.464932502Z" level=info msg="CreateContainer within sandbox \"372b0e2e0cf5ce54b6ea8bab144b80a5ec10d04156da2dc281fb66a1b3d86aad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 20 00:36:51.561441 containerd[1477]: time="2026-01-20T00:36:51.561310234Z" level=info msg="CreateContainer within sandbox \"372b0e2e0cf5ce54b6ea8bab144b80a5ec10d04156da2dc281fb66a1b3d86aad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"64cbe8197350f1501f114f12bb525f9b63c2e6ffbf22efcadf0ebd1a61ed2e48\""
Jan 20 00:36:51.564675 containerd[1477]: time="2026-01-20T00:36:51.562956308Z" level=info msg="StartContainer for \"64cbe8197350f1501f114f12bb525f9b63c2e6ffbf22efcadf0ebd1a61ed2e48\""
Jan 20 00:36:51.664193 systemd[1]: Started cri-containerd-64cbe8197350f1501f114f12bb525f9b63c2e6ffbf22efcadf0ebd1a61ed2e48.scope - libcontainer container 64cbe8197350f1501f114f12bb525f9b63c2e6ffbf22efcadf0ebd1a61ed2e48.
Jan 20 00:36:51.831097 containerd[1477]: time="2026-01-20T00:36:51.826405191Z" level=info msg="StartContainer for \"64cbe8197350f1501f114f12bb525f9b63c2e6ffbf22efcadf0ebd1a61ed2e48\" returns successfully"
Jan 20 00:36:52.670781 kubelet[2570]: E0120 00:36:52.670736 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:53.681292 kubelet[2570]: E0120 00:36:53.681150 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:54.698362 kubelet[2570]: E0120 00:36:54.686240 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:55.937942 kubelet[2570]: E0120 00:36:55.935181 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:56.009914 systemd[1]: Started sshd@19-10.0.0.24:22-10.0.0.1:43478.service - OpenSSH per-connection server daemon (10.0.0.1:43478).
Jan 20 00:36:56.073060 kubelet[2570]: E0120 00:36:56.070268 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:36:56.144830 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 43478 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:36:56.161105 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:36:56.204880 systemd-logind[1462]: New session 20 of user core.
Jan 20 00:36:56.223024 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 20 00:36:56.738726 sshd[4213]: pam_unix(sshd:session): session closed for user core
Jan 20 00:36:56.760219 systemd[1]: sshd@19-10.0.0.24:22-10.0.0.1:43478.service: Deactivated successfully.
Jan 20 00:36:56.773453 systemd[1]: session-20.scope: Deactivated successfully.
Jan 20 00:36:56.780230 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit.
Jan 20 00:36:56.782374 systemd-logind[1462]: Removed session 20.
Jan 20 00:37:00.475125 update_engine[1465]: I20260120 00:37:00.474890 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 00:37:00.476281 update_engine[1465]: I20260120 00:37:00.475839 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 00:37:00.476423 update_engine[1465]: I20260120 00:37:00.476342 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 00:37:00.497912 update_engine[1465]: E20260120 00:37:00.497148 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 00:37:00.497912 update_engine[1465]: I20260120 00:37:00.497530 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 20 00:37:00.498260 update_engine[1465]: I20260120 00:37:00.497941 1465 omaha_request_action.cc:617] Omaha request response:
Jan 20 00:37:00.498516 update_engine[1465]: E20260120 00:37:00.498421 1465 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 20 00:37:00.499028 update_engine[1465]: I20260120 00:37:00.498903 1465 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 20 00:37:00.499028 update_engine[1465]: I20260120 00:37:00.498981 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 20 00:37:00.499028 update_engine[1465]: I20260120 00:37:00.499001 1465 update_attempter.cc:306] Processing Done.
Jan 20 00:37:00.501761 update_engine[1465]: E20260120 00:37:00.499053 1465 update_attempter.cc:619] Update failed.
Jan 20 00:37:00.501761 update_engine[1465]: I20260120 00:37:00.499091 1465 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 20 00:37:00.501761 update_engine[1465]: I20260120 00:37:00.499109 1465 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 20 00:37:00.501761 update_engine[1465]: I20260120 00:37:00.499123 1465 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 20 00:37:00.501761 update_engine[1465]: I20260120 00:37:00.499349 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 20 00:37:00.501761 update_engine[1465]: I20260120 00:37:00.499460 1465 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 20 00:37:00.501761 update_engine[1465]: I20260120 00:37:00.499478 1465 omaha_request_action.cc:272] Request:
Jan 20 00:37:00.501761 update_engine[1465]:     [Omaha error-event XML elided; markup not preserved in this capture]
Jan 20 00:37:00.501761 update_engine[1465]: I20260120 00:37:00.499491 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 20 00:37:00.503161 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 20 00:37:00.503818 update_engine[1465]: I20260120 00:37:00.503316 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 20 00:37:00.503818 update_engine[1465]: I20260120 00:37:00.503748 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 20 00:37:00.526872 update_engine[1465]: E20260120 00:37:00.526698 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 20 00:37:00.526872 update_engine[1465]: I20260120 00:37:00.526862 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 20 00:37:00.527083 update_engine[1465]: I20260120 00:37:00.526887 1465 omaha_request_action.cc:617] Omaha request response:
Jan 20 00:37:00.527083 update_engine[1465]: I20260120 00:37:00.526901 1465 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 20 00:37:00.527083 update_engine[1465]: I20260120 00:37:00.526913 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 20 00:37:00.527083 update_engine[1465]: I20260120 00:37:00.526923 1465 update_attempter.cc:306] Processing Done.
Jan 20 00:37:00.527083 update_engine[1465]: I20260120 00:37:00.526935 1465 update_attempter.cc:310] Error event sent.
Jan 20 00:37:00.527083 update_engine[1465]: I20260120 00:37:00.526996 1465 update_check_scheduler.cc:74] Next update check in 42m35s
Jan 20 00:37:00.531688 locksmithd[1495]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 20 00:37:01.811437 systemd[1]: Started sshd@20-10.0.0.24:22-10.0.0.1:43484.service - OpenSSH per-connection server daemon (10.0.0.1:43484).
Jan 20 00:37:01.907585 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 43484 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:37:01.914158 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:37:01.929116 systemd-logind[1462]: New session 21 of user core.
Jan 20 00:37:01.936284 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 20 00:37:02.215599 sshd[4229]: pam_unix(sshd:session): session closed for user core
Jan 20 00:37:02.230190 systemd[1]: sshd@20-10.0.0.24:22-10.0.0.1:43484.service: Deactivated successfully.
Jan 20 00:37:02.235437 systemd[1]: session-21.scope: Deactivated successfully.
Jan 20 00:37:02.246777 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit.
Jan 20 00:37:02.253505 systemd-logind[1462]: Removed session 21.
Jan 20 00:37:04.104166 kubelet[2570]: E0120 00:37:04.103238 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:37:05.951721 kubelet[2570]: E0120 00:37:05.950520 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:37:06.964112 kubelet[2570]: E0120 00:37:06.963350 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:37:07.279516 systemd[1]: Started sshd@21-10.0.0.24:22-10.0.0.1:47426.service - OpenSSH per-connection server daemon (10.0.0.1:47426).
Jan 20 00:37:07.364177 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 47426 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:37:07.370184 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:37:07.383137 systemd-logind[1462]: New session 22 of user core.
Jan 20 00:37:07.391092 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 20 00:37:07.771526 sshd[4243]: pam_unix(sshd:session): session closed for user core
Jan 20 00:37:07.782085 systemd[1]: sshd@21-10.0.0.24:22-10.0.0.1:47426.service: Deactivated successfully.
Jan 20 00:37:07.788390 systemd[1]: session-22.scope: Deactivated successfully.
Jan 20 00:37:07.793863 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit.
Jan 20 00:37:07.796363 systemd-logind[1462]: Removed session 22.
Jan 20 00:37:08.071160 kubelet[2570]: E0120 00:37:08.070796 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 00:37:13.406073 systemd[1]: Started sshd@22-10.0.0.24:22-10.0.0.1:40476.service - OpenSSH per-connection server daemon (10.0.0.1:40476).
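[Annotation] The update_engine exchange above is the expected failure mode when Flatcar's automatic updates are switched off: the Omaha request is posted to the literal host name "disabled" (as setting SERVER=disabled in /etc/flatcar/update.conf does), DNS resolution of that name fails, the fetcher retries three times, the error is converted to kActionCodeOmahaErrorInHTTPResponse, and the next check is rescheduled (here 42m35s out). A rough Python sketch of that fetch/retry shape; the retry count mirrors the log, but the pacing and scheduling are simplified assumptions, not update_engine's actual logic:

    # Rough shape of the fetch/retry cycle logged above; simplified, not the
    # real libcurl_http_fetcher.cc / update_check_scheduler.cc behavior.
    import socket
    import time

    def post_omaha_request(server="disabled", retries=3):
        for attempt in range(1, retries + 1):
            try:
                # With SERVER=disabled the host is literally "disabled", so
                # resolution fails: "Could not resolve host: disabled".
                socket.getaddrinfo(server, 443)
                return True
            except socket.gaierror:
                print(f"No HTTP response, retry {attempt}")
                time.sleep(1)  # the real fetcher waits much longer between tries
        print("Omaha request network transfer failed.")
        return False  # caller reports the error event and reschedules the check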
Jan 20 00:37:13.591599 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 40476 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:37:13.597380 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:37:13.670093 systemd-logind[1462]: New session 23 of user core.
Jan 20 00:37:13.688024 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 20 00:37:13.969270 sshd[4258]: pam_unix(sshd:session): session closed for user core
Jan 20 00:37:13.985932 systemd[1]: sshd@22-10.0.0.24:22-10.0.0.1:40476.service: Deactivated successfully.
Jan 20 00:37:13.992909 systemd[1]: session-23.scope: Deactivated successfully.
Jan 20 00:37:13.997495 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit.
Jan 20 00:37:14.018214 systemd[1]: Started sshd@23-10.0.0.24:22-10.0.0.1:40478.service - OpenSSH per-connection server daemon (10.0.0.1:40478).
Jan 20 00:37:14.022047 systemd-logind[1462]: Removed session 23.
Jan 20 00:37:14.092614 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 40478 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA
Jan 20 00:37:14.098467 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 00:37:14.114065 systemd-logind[1462]: New session 24 of user core.
Jan 20 00:37:14.130997 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 20 00:37:16.929112 containerd[1477]: time="2026-01-20T00:37:16.927298540Z" level=info msg="StopContainer for \"0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f\" with timeout 30 (s)"
Jan 20 00:37:16.937973 containerd[1477]: time="2026-01-20T00:37:16.937873667Z" level=info msg="Stop container \"0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f\" with signal terminated"
Jan 20 00:37:16.941369 containerd[1477]: time="2026-01-20T00:37:16.941205830Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 00:37:16.969188 containerd[1477]: time="2026-01-20T00:37:16.969137124Z" level=info msg="StopContainer for \"26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd\" with timeout 2 (s)"
Jan 20 00:37:16.971901 containerd[1477]: time="2026-01-20T00:37:16.971499506Z" level=info msg="Stop container \"26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd\" with signal terminated"
Jan 20 00:37:16.998777 systemd[1]: cri-containerd-0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f.scope: Deactivated successfully.
Jan 20 00:37:16.999252 systemd[1]: cri-containerd-0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f.scope: Consumed 1.715s CPU time.
Jan 20 00:37:17.019412 systemd-networkd[1395]: lxc_health: Link DOWN
Jan 20 00:37:17.019426 systemd-networkd[1395]: lxc_health: Lost carrier
Jan 20 00:37:17.068336 systemd[1]: cri-containerd-26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd.scope: Deactivated successfully.
Jan 20 00:37:17.069223 systemd[1]: cri-containerd-26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd.scope: Consumed 16.495s CPU time.
Jan 20 00:37:17.125072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f-rootfs.mount: Deactivated successfully.
Jan 20 00:37:17.166241 containerd[1477]: time="2026-01-20T00:37:17.165989414Z" level=info msg="shim disconnected" id=0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f namespace=k8s.io
Jan 20 00:37:17.166241 containerd[1477]: time="2026-01-20T00:37:17.166110209Z" level=warning msg="cleaning up after shim disconnected" id=0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f namespace=k8s.io
Jan 20 00:37:17.166241 containerd[1477]: time="2026-01-20T00:37:17.166128603Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:37:17.190486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd-rootfs.mount: Deactivated successfully.
Jan 20 00:37:17.219231 containerd[1477]: time="2026-01-20T00:37:17.215473219Z" level=info msg="shim disconnected" id=26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd namespace=k8s.io
Jan 20 00:37:17.224860 containerd[1477]: time="2026-01-20T00:37:17.220996254Z" level=warning msg="cleaning up after shim disconnected" id=26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd namespace=k8s.io
Jan 20 00:37:17.224860 containerd[1477]: time="2026-01-20T00:37:17.221038333Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:37:17.273055 containerd[1477]: time="2026-01-20T00:37:17.272795413Z" level=info msg="StopContainer for \"0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f\" returns successfully"
Jan 20 00:37:17.299661 containerd[1477]: time="2026-01-20T00:37:17.299523140Z" level=info msg="StopPodSandbox for \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\""
Jan 20 00:37:17.299872 containerd[1477]: time="2026-01-20T00:37:17.299728453Z" level=info msg="Container to stop \"0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:37:17.306845 containerd[1477]: time="2026-01-20T00:37:17.305333893Z" level=info msg="StopContainer for \"26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd\" returns successfully"
Jan 20 00:37:17.305844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d-shm.mount: Deactivated successfully.
Jan 20 00:37:17.308195 containerd[1477]: time="2026-01-20T00:37:17.307488305Z" level=info msg="StopPodSandbox for \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\""
Jan 20 00:37:17.308195 containerd[1477]: time="2026-01-20T00:37:17.307859218Z" level=info msg="Container to stop \"26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:37:17.308195 containerd[1477]: time="2026-01-20T00:37:17.307969414Z" level=info msg="Container to stop \"7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:37:17.308195 containerd[1477]: time="2026-01-20T00:37:17.307985133Z" level=info msg="Container to stop \"78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:37:17.308195 containerd[1477]: time="2026-01-20T00:37:17.308078006Z" level=info msg="Container to stop \"98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:37:17.308195 containerd[1477]: time="2026-01-20T00:37:17.308095489Z" level=info msg="Container to stop \"e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 20 00:37:17.325204 systemd[1]: cri-containerd-d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d.scope: Deactivated successfully.
Jan 20 00:37:17.327857 systemd[1]: cri-containerd-4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d.scope: Deactivated successfully.
Jan 20 00:37:17.416510 containerd[1477]: time="2026-01-20T00:37:17.416377315Z" level=info msg="shim disconnected" id=4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d namespace=k8s.io
Jan 20 00:37:17.420381 containerd[1477]: time="2026-01-20T00:37:17.417701492Z" level=warning msg="cleaning up after shim disconnected" id=4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d namespace=k8s.io
Jan 20 00:37:17.420381 containerd[1477]: time="2026-01-20T00:37:17.417734033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:37:17.420381 containerd[1477]: time="2026-01-20T00:37:17.416960244Z" level=info msg="shim disconnected" id=d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d namespace=k8s.io
Jan 20 00:37:17.420381 containerd[1477]: time="2026-01-20T00:37:17.417869072Z" level=warning msg="cleaning up after shim disconnected" id=d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d namespace=k8s.io
Jan 20 00:37:17.420381 containerd[1477]: time="2026-01-20T00:37:17.417882837Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 00:37:17.504307 containerd[1477]: time="2026-01-20T00:37:17.502931436Z" level=info msg="TearDown network for sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" successfully"
Jan 20 00:37:17.504307 containerd[1477]: time="2026-01-20T00:37:17.503017255Z" level=info msg="StopPodSandbox for \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" returns successfully"
Jan 20 00:37:17.504307 containerd[1477]: time="2026-01-20T00:37:17.503845080Z" level=info msg="TearDown network for sandbox \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\" successfully"
Jan 20 00:37:17.504307 containerd[1477]: time="2026-01-20T00:37:17.503878221Z" level=info msg="StopPodSandbox for \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\" returns successfully"
Jan 20 00:37:17.530239 kubelet[2570]: I0120 00:37:17.528179 2570 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d"
Jan 20 00:37:17.542033 kubelet[2570]: I0120 00:37:17.541886 2570 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d"
Jan 20 00:37:17.564357 kubelet[2570]: I0120 00:37:17.564221 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pznrn\" (UniqueName: \"kubernetes.io/projected/c7f028f7-e3ff-49bc-adb6-fd1f6e595003-kube-api-access-pznrn\") pod \"c7f028f7-e3ff-49bc-adb6-fd1f6e595003\" (UID: \"c7f028f7-e3ff-49bc-adb6-fd1f6e595003\") "
Jan 20 00:37:17.564802 kubelet[2570]: I0120 00:37:17.564340 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7f028f7-e3ff-49bc-adb6-fd1f6e595003-cilium-config-path\") pod \"c7f028f7-e3ff-49bc-adb6-fd1f6e595003\" (UID: \"c7f028f7-e3ff-49bc-adb6-fd1f6e595003\") "
Jan 20 00:37:17.577830 kubelet[2570]: I0120 00:37:17.577725 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f028f7-e3ff-49bc-adb6-fd1f6e595003-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c7f028f7-e3ff-49bc-adb6-fd1f6e595003" (UID: "c7f028f7-e3ff-49bc-adb6-fd1f6e595003"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 00:37:17.580014 kubelet[2570]: I0120 00:37:17.578316 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f028f7-e3ff-49bc-adb6-fd1f6e595003-kube-api-access-pznrn" (OuterVolumeSpecName: "kube-api-access-pznrn") pod "c7f028f7-e3ff-49bc-adb6-fd1f6e595003" (UID: "c7f028f7-e3ff-49bc-adb6-fd1f6e595003"). InnerVolumeSpecName "kube-api-access-pznrn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 00:37:17.680044 kubelet[2570]: I0120 00:37:17.675109 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-hostproc\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.680044 kubelet[2570]: I0120 00:37:17.676090 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-config-path\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.680044 kubelet[2570]: I0120 00:37:17.676451 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-cgroup\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.680044 kubelet[2570]: I0120 00:37:17.676520 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-run\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.680044 kubelet[2570]: I0120 00:37:17.676601 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cni-path\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.680044 kubelet[2570]: I0120 00:37:17.676638 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b47e32c-9040-4e71-939f-6287ca4dcb3e-hubble-tls\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.688502 kubelet[2570]: I0120 00:37:17.678366 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-hostproc" (OuterVolumeSpecName: "hostproc") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.688502 kubelet[2570]: I0120 00:37:17.678807 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-lib-modules\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.688502 kubelet[2570]: I0120 00:37:17.678854 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-host-proc-sys-net\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.688502 kubelet[2570]: I0120 00:37:17.678894 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b47e32c-9040-4e71-939f-6287ca4dcb3e-clustermesh-secrets\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.688502 kubelet[2570]: I0120 00:37:17.678916 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.688502 kubelet[2570]: I0120 00:37:17.679043 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-etc-cni-netd\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.688951 kubelet[2570]: I0120 00:37:17.679229 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwfbl\" (UniqueName: \"kubernetes.io/projected/1b47e32c-9040-4e71-939f-6287ca4dcb3e-kube-api-access-mwfbl\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.688951 kubelet[2570]: I0120 00:37:17.679336 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-host-proc-sys-kernel\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.688951 kubelet[2570]: I0120 00:37:17.679367 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-bpf-maps\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.688951 kubelet[2570]: I0120 00:37:17.679397 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-xtables-lock\") pod \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\" (UID: \"1b47e32c-9040-4e71-939f-6287ca4dcb3e\") "
Jan 20 00:37:17.688951 kubelet[2570]: I0120 00:37:17.679401 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.688951 kubelet[2570]: I0120 00:37:17.679620 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pznrn\" (UniqueName: \"kubernetes.io/projected/c7f028f7-e3ff-49bc-adb6-fd1f6e595003-kube-api-access-pznrn\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.689295 kubelet[2570]: I0120 00:37:17.679647 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7f028f7-e3ff-49bc-adb6-fd1f6e595003-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.689295 kubelet[2570]: I0120 00:37:17.679664 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.689295 kubelet[2570]: I0120 00:37:17.679729 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.689295 kubelet[2570]: I0120 00:37:17.680117 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.689295 kubelet[2570]: I0120 00:37:17.680262 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.689295 kubelet[2570]: I0120 00:37:17.681883 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.690914 kubelet[2570]: I0120 00:37:17.685224 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.690914 kubelet[2570]: I0120 00:37:17.685274 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cni-path" (OuterVolumeSpecName: "cni-path") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.690914 kubelet[2570]: I0120 00:37:17.685312 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.690914 kubelet[2570]: I0120 00:37:17.685348 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 20 00:37:17.690914 kubelet[2570]: I0120 00:37:17.689490 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 20 00:37:17.693909 kubelet[2570]: I0120 00:37:17.693777 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b47e32c-9040-4e71-939f-6287ca4dcb3e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 20 00:37:17.693909 kubelet[2570]: I0120 00:37:17.693869 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b47e32c-9040-4e71-939f-6287ca4dcb3e-kube-api-access-mwfbl" (OuterVolumeSpecName: "kube-api-access-mwfbl") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "kube-api-access-mwfbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 00:37:17.695888 kubelet[2570]: I0120 00:37:17.695809 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b47e32c-9040-4e71-939f-6287ca4dcb3e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1b47e32c-9040-4e71-939f-6287ca4dcb3e" (UID: "1b47e32c-9040-4e71-939f-6287ca4dcb3e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 20 00:37:17.792806 kubelet[2570]: I0120 00:37:17.788019 2570 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.792806 kubelet[2570]: I0120 00:37:17.788093 2570 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.792806 kubelet[2570]: I0120 00:37:17.788166 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.792806 kubelet[2570]: I0120 00:37:17.788183 2570 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.792806 kubelet[2570]: I0120 00:37:17.788206 2570 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.792806 kubelet[2570]: I0120 00:37:17.788223 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b47e32c-9040-4e71-939f-6287ca4dcb3e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.792806 kubelet[2570]: I0120 00:37:17.788264 2570 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b47e32c-9040-4e71-939f-6287ca4dcb3e-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.792806 kubelet[2570]: I0120 00:37:17.788283 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.793509 kubelet[2570]: I0120 00:37:17.788305 2570 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.793509 kubelet[2570]: I0120 00:37:17.788322 2570 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b47e32c-9040-4e71-939f-6287ca4dcb3e-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.793509 kubelet[2570]: I0120 00:37:17.788340 2570 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b47e32c-9040-4e71-939f-6287ca4dcb3e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.793509 kubelet[2570]: I0120 00:37:17.788358 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mwfbl\" (UniqueName: \"kubernetes.io/projected/1b47e32c-9040-4e71-939f-6287ca4dcb3e-kube-api-access-mwfbl\") on node \"localhost\" DevicePath \"\""
Jan 20 00:37:17.813296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d-rootfs.mount: Deactivated successfully.
Jan 20 00:37:17.814400 systemd[1]: var-lib-kubelet-pods-c7f028f7\x2de3ff\x2d49bc\x2dadb6\x2dfd1f6e595003-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpznrn.mount: Deactivated successfully. Jan 20 00:37:17.815623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d-rootfs.mount: Deactivated successfully. Jan 20 00:37:17.815836 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d-shm.mount: Deactivated successfully. Jan 20 00:37:17.815957 systemd[1]: var-lib-kubelet-pods-1b47e32c\x2d9040\x2d4e71\x2d939f\x2d6287ca4dcb3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmwfbl.mount: Deactivated successfully. Jan 20 00:37:17.816074 systemd[1]: var-lib-kubelet-pods-1b47e32c\x2d9040\x2d4e71\x2d939f\x2d6287ca4dcb3e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 20 00:37:17.816296 systemd[1]: var-lib-kubelet-pods-1b47e32c\x2d9040\x2d4e71\x2d939f\x2d6287ca4dcb3e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 20 00:37:18.267020 sshd[4273]: pam_unix(sshd:session): session closed for user core Jan 20 00:37:18.282806 systemd[1]: sshd@23-10.0.0.24:22-10.0.0.1:40478.service: Deactivated successfully. Jan 20 00:37:18.290936 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 00:37:18.291318 systemd[1]: session-24.scope: Consumed 1.417s CPU time. Jan 20 00:37:18.302841 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit. Jan 20 00:37:18.330777 systemd[1]: Started sshd@24-10.0.0.24:22-10.0.0.1:40482.service - OpenSSH per-connection server daemon (10.0.0.1:40482). Jan 20 00:37:18.372858 systemd-logind[1462]: Removed session 24. Jan 20 00:37:18.601955 systemd[1]: Removed slice kubepods-besteffort-podc7f028f7_e3ff_49bc_adb6_fd1f6e595003.slice - libcontainer container kubepods-besteffort-podc7f028f7_e3ff_49bc_adb6_fd1f6e595003.slice. Jan 20 00:37:18.658337 systemd[1]: kubepods-besteffort-podc7f028f7_e3ff_49bc_adb6_fd1f6e595003.slice: Consumed 1.758s CPU time. Jan 20 00:37:18.662596 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 40482 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:37:18.664613 systemd[1]: Removed slice kubepods-burstable-pod1b47e32c_9040_4e71_939f_6287ca4dcb3e.slice - libcontainer container kubepods-burstable-pod1b47e32c_9040_4e71_939f_6287ca4dcb3e.slice. Jan 20 00:37:18.664816 systemd[1]: kubepods-burstable-pod1b47e32c_9040_4e71_939f_6287ca4dcb3e.slice: Consumed 16.693s CPU time. Jan 20 00:37:18.668725 sshd[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:37:18.689206 systemd-logind[1462]: New session 25 of user core. Jan 20 00:37:18.701097 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 20 00:37:19.042320 kubelet[2570]: I0120 00:37:19.042193 2570 scope.go:117] "RemoveContainer" containerID="e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003" Jan 20 00:37:19.045502 containerd[1477]: time="2026-01-20T00:37:19.045338302Z" level=info msg="RemoveContainer for \"e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003\"" Jan 20 00:37:19.062636 containerd[1477]: time="2026-01-20T00:37:19.062397631Z" level=info msg="RemoveContainer for \"e7fe472da96aa70a907a0befa58d0aac4dda455baddd792f49d4d5ba653ad003\" returns successfully" Jan 20 00:37:19.063218 kubelet[2570]: I0120 00:37:19.063044 2570 scope.go:117] "RemoveContainer" containerID="0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f" Jan 20 00:37:19.065491 containerd[1477]: time="2026-01-20T00:37:19.065406145Z" level=info msg="RemoveContainer for \"0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f\"" Jan 20 00:37:19.077892 containerd[1477]: time="2026-01-20T00:37:19.076374319Z" level=info msg="RemoveContainer for \"0d37f82dd052b55b47ba12f131327ad6cc31d1712d49c68212c36f8ea140b09f\" returns successfully" Jan 20 00:37:19.078074 kubelet[2570]: I0120 00:37:19.077410 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b47e32c-9040-4e71-939f-6287ca4dcb3e" path="/var/lib/kubelet/pods/1b47e32c-9040-4e71-939f-6287ca4dcb3e/volumes" Jan 20 00:37:19.078372 kubelet[2570]: I0120 00:37:19.078338 2570 scope.go:117] "RemoveContainer" containerID="7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b" Jan 20 00:37:19.082627 containerd[1477]: time="2026-01-20T00:37:19.082522554Z" level=info msg="RemoveContainer for \"7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b\"" Jan 20 00:37:19.084868 kubelet[2570]: I0120 00:37:19.082835 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f028f7-e3ff-49bc-adb6-fd1f6e595003" path="/var/lib/kubelet/pods/c7f028f7-e3ff-49bc-adb6-fd1f6e595003/volumes" Jan 20 00:37:19.095940 containerd[1477]: time="2026-01-20T00:37:19.095830250Z" level=info msg="RemoveContainer for \"7df0f822ed006853ea486ce0c85c840cd33ac53d1b9ad4ab8d76e96d05cf305b\" returns successfully" Jan 20 00:37:19.096365 kubelet[2570]: I0120 00:37:19.096270 2570 scope.go:117] "RemoveContainer" containerID="26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd" Jan 20 00:37:19.102130 containerd[1477]: time="2026-01-20T00:37:19.100103535Z" level=info msg="RemoveContainer for \"26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd\"" Jan 20 00:37:19.111267 containerd[1477]: time="2026-01-20T00:37:19.111173639Z" level=info msg="RemoveContainer for \"26e0e646df2d33d2ebe7eafa46a597fbd41030ffe620d07c6109e424de40ecdd\" returns successfully" Jan 20 00:37:19.112323 kubelet[2570]: I0120 00:37:19.112149 2570 scope.go:117] "RemoveContainer" containerID="78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d" Jan 20 00:37:19.119329 containerd[1477]: time="2026-01-20T00:37:19.119163444Z" level=info msg="RemoveContainer for \"78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d\"" Jan 20 00:37:19.131230 containerd[1477]: time="2026-01-20T00:37:19.131120455Z" level=info msg="RemoveContainer for \"78b0acfcc901864dbb672fc48a4388465957e9f51795fe718c38ed3cfde8e79d\" returns successfully" Jan 20 00:37:19.132245 kubelet[2570]: I0120 00:37:19.132144 2570 scope.go:117] "RemoveContainer" containerID="98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f" Jan 20 00:37:19.140123 containerd[1477]: 
time="2026-01-20T00:37:19.139990430Z" level=info msg="RemoveContainer for \"98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f\"" Jan 20 00:37:19.158924 containerd[1477]: time="2026-01-20T00:37:19.156027561Z" level=info msg="RemoveContainer for \"98fd6525927c65399a004b2422238795fd95a4815604c3e49b4dd1573aa41c2f\" returns successfully" Jan 20 00:37:19.160010 containerd[1477]: time="2026-01-20T00:37:19.159971225Z" level=info msg="StopPodSandbox for \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\"" Jan 20 00:37:19.160474 containerd[1477]: time="2026-01-20T00:37:19.160304659Z" level=info msg="TearDown network for sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" successfully" Jan 20 00:37:19.160474 containerd[1477]: time="2026-01-20T00:37:19.160333102Z" level=info msg="StopPodSandbox for \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" returns successfully" Jan 20 00:37:19.166800 containerd[1477]: time="2026-01-20T00:37:19.164164547Z" level=info msg="RemovePodSandbox for \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\"" Jan 20 00:37:19.166800 containerd[1477]: time="2026-01-20T00:37:19.164225651Z" level=info msg="Forcibly stopping sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\"" Jan 20 00:37:19.166800 containerd[1477]: time="2026-01-20T00:37:19.164315018Z" level=info msg="TearDown network for sandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" successfully" Jan 20 00:37:19.175602 containerd[1477]: time="2026-01-20T00:37:19.175062791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:37:19.175602 containerd[1477]: time="2026-01-20T00:37:19.175173056Z" level=info msg="RemovePodSandbox \"4daaeec1bc5614302b177674e80e408651da4dc93bb6c864f1b48f1ec3e97a0d\" returns successfully" Jan 20 00:37:19.176777 containerd[1477]: time="2026-01-20T00:37:19.176742918Z" level=info msg="StopPodSandbox for \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\"" Jan 20 00:37:19.216151 containerd[1477]: time="2026-01-20T00:37:19.215773530Z" level=info msg="TearDown network for sandbox \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\" successfully" Jan 20 00:37:19.216151 containerd[1477]: time="2026-01-20T00:37:19.215927799Z" level=info msg="StopPodSandbox for \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\" returns successfully" Jan 20 00:37:19.220787 containerd[1477]: time="2026-01-20T00:37:19.220188125Z" level=info msg="RemovePodSandbox for \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\"" Jan 20 00:37:19.220787 containerd[1477]: time="2026-01-20T00:37:19.220305825Z" level=info msg="Forcibly stopping sandbox \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\"" Jan 20 00:37:19.220787 containerd[1477]: time="2026-01-20T00:37:19.220408697Z" level=info msg="TearDown network for sandbox \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\" successfully" Jan 20 00:37:19.237837 containerd[1477]: time="2026-01-20T00:37:19.237469449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jan 20 00:37:19.237837 containerd[1477]: time="2026-01-20T00:37:19.237623267Z" level=info msg="RemovePodSandbox \"d716c6bbf8c7d1c45152e61c6bacd897f7d96986135247db239d8459edec796d\" returns successfully" Jan 20 00:37:19.915414 sshd[4432]: pam_unix(sshd:session): session closed for user core Jan 20 00:37:19.952659 systemd[1]: sshd@24-10.0.0.24:22-10.0.0.1:40482.service: Deactivated successfully. Jan 20 00:37:19.959365 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 00:37:19.967283 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit. Jan 20 00:37:19.989957 systemd[1]: Started sshd@25-10.0.0.24:22-10.0.0.1:40490.service - OpenSSH per-connection server daemon (10.0.0.1:40490). Jan 20 00:37:20.001225 systemd-logind[1462]: Removed session 25. Jan 20 00:37:20.022510 kubelet[2570]: I0120 00:37:20.022281 2570 memory_manager.go:355] "RemoveStaleState removing state" podUID="1b47e32c-9040-4e71-939f-6287ca4dcb3e" containerName="cilium-agent" Jan 20 00:37:20.022768 kubelet[2570]: I0120 00:37:20.022490 2570 memory_manager.go:355] "RemoveStaleState removing state" podUID="c7f028f7-e3ff-49bc-adb6-fd1f6e595003" containerName="cilium-operator" Jan 20 00:37:20.045215 systemd[1]: Created slice kubepods-burstable-podf66325f7_d88c_4856_8a3a_15986be4b42c.slice - libcontainer container kubepods-burstable-podf66325f7_d88c_4856_8a3a_15986be4b42c.slice. Jan 20 00:37:20.076802 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 40490 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:37:20.080313 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:37:20.098629 systemd-logind[1462]: New session 26 of user core. Jan 20 00:37:20.123144 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 00:37:20.190081 kubelet[2570]: I0120 00:37:20.187872 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f66325f7-d88c-4856-8a3a-15986be4b42c-cilium-ipsec-secrets\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.190081 kubelet[2570]: I0120 00:37:20.187945 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-xtables-lock\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.190081 kubelet[2570]: I0120 00:37:20.187989 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f66325f7-d88c-4856-8a3a-15986be4b42c-clustermesh-secrets\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.190081 kubelet[2570]: I0120 00:37:20.188025 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f66325f7-d88c-4856-8a3a-15986be4b42c-cilium-config-path\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.190081 kubelet[2570]: I0120 00:37:20.188056 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f66325f7-d88c-4856-8a3a-15986be4b42c-hubble-tls\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.190081 kubelet[2570]: I0120 00:37:20.188089 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-bpf-maps\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.191034 kubelet[2570]: I0120 00:37:20.188115 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-cilium-cgroup\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.191034 kubelet[2570]: I0120 00:37:20.188143 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-etc-cni-netd\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.191034 kubelet[2570]: I0120 00:37:20.188174 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-cni-path\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.191034 kubelet[2570]: I0120 00:37:20.188212 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-hostproc\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.191034 kubelet[2570]: I0120 00:37:20.188241 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-host-proc-sys-net\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.191034 kubelet[2570]: I0120 00:37:20.188271 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-host-proc-sys-kernel\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.191238 kubelet[2570]: I0120 00:37:20.188401 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-cilium-run\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.191238 kubelet[2570]: I0120 00:37:20.188446 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f66325f7-d88c-4856-8a3a-15986be4b42c-lib-modules\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.191238 kubelet[2570]: I0120 00:37:20.188487 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfq6j\" (UniqueName: \"kubernetes.io/projected/f66325f7-d88c-4856-8a3a-15986be4b42c-kube-api-access-lfq6j\") pod \"cilium-rhfcg\" (UID: \"f66325f7-d88c-4856-8a3a-15986be4b42c\") " pod="kube-system/cilium-rhfcg" Jan 20 00:37:20.204401 sshd[4447]: pam_unix(sshd:session): session closed for user core Jan 20 00:37:20.221803 systemd[1]: sshd@25-10.0.0.24:22-10.0.0.1:40490.service: Deactivated successfully. Jan 20 00:37:20.234165 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 00:37:20.243181 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit. Jan 20 00:37:20.286229 systemd[1]: Started sshd@26-10.0.0.24:22-10.0.0.1:40504.service - OpenSSH per-connection server daemon (10.0.0.1:40504). Jan 20 00:37:20.294267 systemd-logind[1462]: Removed session 26. Jan 20 00:37:20.364835 kubelet[2570]: E0120 00:37:20.364632 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:20.367232 containerd[1477]: time="2026-01-20T00:37:20.365999229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhfcg,Uid:f66325f7-d88c-4856-8a3a-15986be4b42c,Namespace:kube-system,Attempt:0,}" Jan 20 00:37:20.373210 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 40504 ssh2: RSA SHA256:ih+h3dt1c9chvSzmGppOapeMZVkRX8y+sbFiMafy0RA Jan 20 00:37:20.379415 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:37:20.394439 systemd-logind[1462]: New session 27 of user core. 
Jan 20 00:37:20.406304 kubelet[2570]: E0120 00:37:20.406173 2570 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 00:37:20.436084 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 00:37:20.451269 containerd[1477]: time="2026-01-20T00:37:20.450654098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:37:20.451269 containerd[1477]: time="2026-01-20T00:37:20.450783780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:37:20.451269 containerd[1477]: time="2026-01-20T00:37:20.450804449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:37:20.451269 containerd[1477]: time="2026-01-20T00:37:20.450962064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:37:20.507070 systemd[1]: Started cri-containerd-37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee.scope - libcontainer container 37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee. Jan 20 00:37:20.590825 containerd[1477]: time="2026-01-20T00:37:20.589868781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhfcg,Uid:f66325f7-d88c-4856-8a3a-15986be4b42c,Namespace:kube-system,Attempt:0,} returns sandbox id \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\"" Jan 20 00:37:20.592229 kubelet[2570]: E0120 00:37:20.592094 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:20.599513 containerd[1477]: time="2026-01-20T00:37:20.597965883Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 00:37:20.648207 containerd[1477]: time="2026-01-20T00:37:20.648098830Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bab81dfb6e08c854966cf09a4db44ab33a0f323e4f96e4c755d3efc76b31c1c1\"" Jan 20 00:37:20.653733 containerd[1477]: time="2026-01-20T00:37:20.649772316Z" level=info msg="StartContainer for \"bab81dfb6e08c854966cf09a4db44ab33a0f323e4f96e4c755d3efc76b31c1c1\"" Jan 20 00:37:20.789944 systemd[1]: Started cri-containerd-bab81dfb6e08c854966cf09a4db44ab33a0f323e4f96e4c755d3efc76b31c1c1.scope - libcontainer container bab81dfb6e08c854966cf09a4db44ab33a0f323e4f96e4c755d3efc76b31c1c1. Jan 20 00:37:20.914398 containerd[1477]: time="2026-01-20T00:37:20.914107440Z" level=info msg="StartContainer for \"bab81dfb6e08c854966cf09a4db44ab33a0f323e4f96e4c755d3efc76b31c1c1\" returns successfully" Jan 20 00:37:20.972222 systemd[1]: cri-containerd-bab81dfb6e08c854966cf09a4db44ab33a0f323e4f96e4c755d3efc76b31c1c1.scope: Deactivated successfully. 
Jan 20 00:37:21.052060 containerd[1477]: time="2026-01-20T00:37:21.050735072Z" level=info msg="shim disconnected" id=bab81dfb6e08c854966cf09a4db44ab33a0f323e4f96e4c755d3efc76b31c1c1 namespace=k8s.io Jan 20 00:37:21.052060 containerd[1477]: time="2026-01-20T00:37:21.051648456Z" level=warning msg="cleaning up after shim disconnected" id=bab81dfb6e08c854966cf09a4db44ab33a0f323e4f96e4c755d3efc76b31c1c1 namespace=k8s.io Jan 20 00:37:21.052060 containerd[1477]: time="2026-01-20T00:37:21.051678532Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:37:21.074462 kubelet[2570]: E0120 00:37:21.072520 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-t6lvh" podUID="5bbf5739-56fb-43bc-bfda-0ed9e40f91d8" Jan 20 00:37:21.444876 kubelet[2570]: I0120 00:37:21.441331 2570 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T00:37:21Z","lastTransitionTime":"2026-01-20T00:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 20 00:37:21.599348 kubelet[2570]: E0120 00:37:21.599151 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:21.608199 containerd[1477]: time="2026-01-20T00:37:21.607885895Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 00:37:21.692400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2189019435.mount: Deactivated successfully. Jan 20 00:37:21.734201 containerd[1477]: time="2026-01-20T00:37:21.733984571Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"90a2741fd687b5544c93b3706454f20a2a656e8e19f42a86a8abbfe504074ebe\"" Jan 20 00:37:21.738340 containerd[1477]: time="2026-01-20T00:37:21.736285062Z" level=info msg="StartContainer for \"90a2741fd687b5544c93b3706454f20a2a656e8e19f42a86a8abbfe504074ebe\"" Jan 20 00:37:21.832996 systemd[1]: Started cri-containerd-90a2741fd687b5544c93b3706454f20a2a656e8e19f42a86a8abbfe504074ebe.scope - libcontainer container 90a2741fd687b5544c93b3706454f20a2a656e8e19f42a86a8abbfe504074ebe. Jan 20 00:37:21.913247 containerd[1477]: time="2026-01-20T00:37:21.909944978Z" level=info msg="StartContainer for \"90a2741fd687b5544c93b3706454f20a2a656e8e19f42a86a8abbfe504074ebe\" returns successfully" Jan 20 00:37:21.923431 systemd[1]: cri-containerd-90a2741fd687b5544c93b3706454f20a2a656e8e19f42a86a8abbfe504074ebe.scope: Deactivated successfully. 
Jan 20 00:37:22.015937 containerd[1477]: time="2026-01-20T00:37:22.015278827Z" level=info msg="shim disconnected" id=90a2741fd687b5544c93b3706454f20a2a656e8e19f42a86a8abbfe504074ebe namespace=k8s.io Jan 20 00:37:22.015937 containerd[1477]: time="2026-01-20T00:37:22.015350320Z" level=warning msg="cleaning up after shim disconnected" id=90a2741fd687b5544c93b3706454f20a2a656e8e19f42a86a8abbfe504074ebe namespace=k8s.io Jan 20 00:37:22.015937 containerd[1477]: time="2026-01-20T00:37:22.015365789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:37:22.319123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90a2741fd687b5544c93b3706454f20a2a656e8e19f42a86a8abbfe504074ebe-rootfs.mount: Deactivated successfully. Jan 20 00:37:22.628258 kubelet[2570]: E0120 00:37:22.623033 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:22.638505 containerd[1477]: time="2026-01-20T00:37:22.638326245Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 00:37:28.969977 kubelet[2570]: E0120 00:37:28.969798 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-t6lvh" podUID="5bbf5739-56fb-43bc-bfda-0ed9e40f91d8" Jan 20 00:37:28.986614 kubelet[2570]: E0120 00:37:28.986493 2570 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 00:37:28.990441 kubelet[2570]: E0120 00:37:28.990351 2570 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.637s" Jan 20 00:37:29.007371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049445057.mount: Deactivated successfully. Jan 20 00:37:29.035484 containerd[1477]: time="2026-01-20T00:37:29.035279692Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f336eb758181bb7d9aa99874b0d6809db3cdab6c2a70f40a490c578593d466a\"" Jan 20 00:37:29.039632 containerd[1477]: time="2026-01-20T00:37:29.037055404Z" level=info msg="StartContainer for \"1f336eb758181bb7d9aa99874b0d6809db3cdab6c2a70f40a490c578593d466a\"" Jan 20 00:37:29.215077 systemd[1]: Started cri-containerd-1f336eb758181bb7d9aa99874b0d6809db3cdab6c2a70f40a490c578593d466a.scope - libcontainer container 1f336eb758181bb7d9aa99874b0d6809db3cdab6c2a70f40a490c578593d466a. Jan 20 00:37:29.340827 containerd[1477]: time="2026-01-20T00:37:29.338195071Z" level=info msg="StartContainer for \"1f336eb758181bb7d9aa99874b0d6809db3cdab6c2a70f40a490c578593d466a\" returns successfully" Jan 20 00:37:29.343933 systemd[1]: cri-containerd-1f336eb758181bb7d9aa99874b0d6809db3cdab6c2a70f40a490c578593d466a.scope: Deactivated successfully. 
Jan 20 00:37:29.464140 containerd[1477]: time="2026-01-20T00:37:29.463613838Z" level=info msg="shim disconnected" id=1f336eb758181bb7d9aa99874b0d6809db3cdab6c2a70f40a490c578593d466a namespace=k8s.io Jan 20 00:37:29.464479 containerd[1477]: time="2026-01-20T00:37:29.464147386Z" level=warning msg="cleaning up after shim disconnected" id=1f336eb758181bb7d9aa99874b0d6809db3cdab6c2a70f40a490c578593d466a namespace=k8s.io Jan 20 00:37:29.464479 containerd[1477]: time="2026-01-20T00:37:29.464241331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:37:30.002358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f336eb758181bb7d9aa99874b0d6809db3cdab6c2a70f40a490c578593d466a-rootfs.mount: Deactivated successfully. Jan 20 00:37:30.010799 kubelet[2570]: E0120 00:37:30.010625 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:30.022453 containerd[1477]: time="2026-01-20T00:37:30.015632697Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 00:37:30.132457 containerd[1477]: time="2026-01-20T00:37:30.132284507Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"54f486d2755adb2210f99b98e1f591996de9599b3997dff8860434f71eaf1028\"" Jan 20 00:37:30.138197 containerd[1477]: time="2026-01-20T00:37:30.133518038Z" level=info msg="StartContainer for \"54f486d2755adb2210f99b98e1f591996de9599b3997dff8860434f71eaf1028\"" Jan 20 00:37:30.235899 systemd[1]: Started cri-containerd-54f486d2755adb2210f99b98e1f591996de9599b3997dff8860434f71eaf1028.scope - libcontainer container 54f486d2755adb2210f99b98e1f591996de9599b3997dff8860434f71eaf1028. Jan 20 00:37:30.343819 systemd[1]: cri-containerd-54f486d2755adb2210f99b98e1f591996de9599b3997dff8860434f71eaf1028.scope: Deactivated successfully. Jan 20 00:37:30.358275 containerd[1477]: time="2026-01-20T00:37:30.358175687Z" level=info msg="StartContainer for \"54f486d2755adb2210f99b98e1f591996de9599b3997dff8860434f71eaf1028\" returns successfully" Jan 20 00:37:30.460392 containerd[1477]: time="2026-01-20T00:37:30.460204419Z" level=info msg="shim disconnected" id=54f486d2755adb2210f99b98e1f591996de9599b3997dff8860434f71eaf1028 namespace=k8s.io Jan 20 00:37:30.460392 containerd[1477]: time="2026-01-20T00:37:30.460302762Z" level=warning msg="cleaning up after shim disconnected" id=54f486d2755adb2210f99b98e1f591996de9599b3997dff8860434f71eaf1028 namespace=k8s.io Jan 20 00:37:30.460392 containerd[1477]: time="2026-01-20T00:37:30.460325405Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:37:31.004487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54f486d2755adb2210f99b98e1f591996de9599b3997dff8860434f71eaf1028-rootfs.mount: Deactivated successfully. 
Jan 20 00:37:31.041199 kubelet[2570]: E0120 00:37:31.041124 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:31.046274 containerd[1477]: time="2026-01-20T00:37:31.046080581Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 00:37:31.073083 kubelet[2570]: E0120 00:37:31.072992 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-t6lvh" podUID="5bbf5739-56fb-43bc-bfda-0ed9e40f91d8" Jan 20 00:37:31.076488 kubelet[2570]: E0120 00:37:31.073010 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:31.121512 containerd[1477]: time="2026-01-20T00:37:31.121293634Z" level=info msg="CreateContainer within sandbox \"37b766c0ee7f3caf1df864a2fd8a47553f13b4431b3fac73324d21311ea1c1ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"da8b5d3548bcb3501e51cfa737b49ffef1897b9a0c54f409178b4b251fa0aead\"" Jan 20 00:37:31.121512 containerd[1477]: time="2026-01-20T00:37:31.122291787Z" level=info msg="StartContainer for \"da8b5d3548bcb3501e51cfa737b49ffef1897b9a0c54f409178b4b251fa0aead\"" Jan 20 00:37:31.307167 systemd[1]: Started cri-containerd-da8b5d3548bcb3501e51cfa737b49ffef1897b9a0c54f409178b4b251fa0aead.scope - libcontainer container da8b5d3548bcb3501e51cfa737b49ffef1897b9a0c54f409178b4b251fa0aead. 
Jan 20 00:37:31.449657 containerd[1477]: time="2026-01-20T00:37:31.449268691Z" level=info msg="StartContainer for \"da8b5d3548bcb3501e51cfa737b49ffef1897b9a0c54f409178b4b251fa0aead\" returns successfully" Jan 20 00:37:32.052604 kubelet[2570]: E0120 00:37:32.046845 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:32.136106 kubelet[2570]: I0120 00:37:32.133879 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rhfcg" podStartSLOduration=13.133827572 podStartE2EDuration="13.133827572s" podCreationTimestamp="2026-01-20 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:37:32.123504133 +0000 UTC m=+133.233974491" watchObservedRunningTime="2026-01-20 00:37:32.133827572 +0000 UTC m=+133.244297910" Jan 20 00:37:32.765223 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 20 00:37:33.058011 kubelet[2570]: E0120 00:37:33.056902 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:33.090851 kubelet[2570]: E0120 00:37:33.089438 2570 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-t6lvh" podUID="5bbf5739-56fb-43bc-bfda-0ed9e40f91d8" Jan 20 00:37:35.078612 kubelet[2570]: E0120 00:37:35.076411 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:38.249071 systemd[1]: run-containerd-runc-k8s.io-da8b5d3548bcb3501e51cfa737b49ffef1897b9a0c54f409178b4b251fa0aead-runc.2KSaWX.mount: Deactivated successfully. Jan 20 00:37:38.471487 systemd-networkd[1395]: lxc_health: Link UP Jan 20 00:37:38.482322 systemd-networkd[1395]: lxc_health: Gained carrier Jan 20 00:37:39.931097 systemd-networkd[1395]: lxc_health: Gained IPv6LL Jan 20 00:37:40.369994 kubelet[2570]: E0120 00:37:40.368665 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:41.090288 kubelet[2570]: E0120 00:37:41.090208 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:42.091716 kubelet[2570]: E0120 00:37:42.091632 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:37:44.764470 sshd[4455]: pam_unix(sshd:session): session closed for user core Jan 20 00:37:44.769765 systemd[1]: sshd@26-10.0.0.24:22-10.0.0.1:40504.service: Deactivated successfully. Jan 20 00:37:44.772317 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 00:37:44.772745 systemd[1]: session-27.scope: Consumed 2.624s CPU time. Jan 20 00:37:44.773640 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit. 
Jan 20 00:37:44.775109 systemd-logind[1462]: Removed session 27.