Jan 20 03:05:55.633629 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 03:05:55.633659 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 03:05:55.633674 kernel: BIOS-provided physical RAM map:
Jan 20 03:05:55.633683 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 03:05:55.633692 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 03:05:55.633700 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 03:05:55.633711 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 20 03:05:55.633720 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 20 03:05:55.633729 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 03:05:55.633821 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 03:05:55.633832 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 03:05:55.633845 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 03:05:55.633965 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 03:05:55.633974 kernel: NX (Execute Disable) protection: active
Jan 20 03:05:55.633985 kernel: APIC: Static calls initialized
Jan 20 03:05:55.633995 kernel: SMBIOS 2.8 present.
Jan 20 03:05:55.634009 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 20 03:05:55.634018 kernel: DMI: Memory slots populated: 1/1
Jan 20 03:05:55.634028 kernel: Hypervisor detected: KVM
Jan 20 03:05:55.634037 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 03:05:55.634047 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 03:05:55.634057 kernel: kvm-clock: using sched offset of 8964057626 cycles
Jan 20 03:05:55.634069 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 03:05:55.634078 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 03:05:55.634086 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 03:05:55.634095 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 03:05:55.634107 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 20 03:05:55.634117 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 03:05:55.634129 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 03:05:55.634139 kernel: Using GB pages for direct mapping
Jan 20 03:05:55.634147 kernel: ACPI: Early table checksum verification disabled
Jan 20 03:05:55.634156 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 20 03:05:55.634165 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:05:55.634175 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:05:55.634185 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:05:55.634198 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 20 03:05:55.634208 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:05:55.634218 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:05:55.634228 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:05:55.634238 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:05:55.634253 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 20 03:05:55.634266 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 20 03:05:55.634276 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 20 03:05:55.634287 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 20 03:05:55.634297 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 20 03:05:55.634307 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 20 03:05:55.634318 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 20 03:05:55.634328 kernel: No NUMA configuration found
Jan 20 03:05:55.634338 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 20 03:05:55.634351 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 20 03:05:55.634362 kernel: Zone ranges:
Jan 20 03:05:55.634372 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 03:05:55.634382 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 20 03:05:55.634393 kernel: Normal empty
Jan 20 03:05:55.634403 kernel: Device empty
Jan 20 03:05:55.634413 kernel: Movable zone start for each node
Jan 20 03:05:55.634423 kernel: Early memory node ranges
Jan 20 03:05:55.634434 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 03:05:55.634444 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 20 03:05:55.634457 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 20 03:05:55.634468 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 03:05:55.634478 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 03:05:55.634488 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 20 03:05:55.634498 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 03:05:55.634509 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 03:05:55.634519 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 03:05:55.634529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 03:05:55.634540 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 03:05:55.634553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 03:05:55.634563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 03:05:55.634574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 03:05:55.634584 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 03:05:55.634594 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 03:05:55.634604 kernel: TSC deadline timer available
Jan 20 03:05:55.634615 kernel: CPU topo: Max. logical packages: 1
Jan 20 03:05:55.634625 kernel: CPU topo: Max. logical dies: 1
Jan 20 03:05:55.634635 kernel: CPU topo: Max. dies per package: 1
Jan 20 03:05:55.634648 kernel: CPU topo: Max. threads per core: 1
Jan 20 03:05:55.634658 kernel: CPU topo: Num. cores per package: 4
Jan 20 03:05:55.634669 kernel: CPU topo: Num. threads per package: 4
Jan 20 03:05:55.634679 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 03:05:55.634689 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 03:05:55.634699 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 03:05:55.634710 kernel: kvm-guest: setup PV sched yield
Jan 20 03:05:55.634720 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 03:05:55.634730 kernel: Booting paravirtualized kernel on KVM
Jan 20 03:05:55.634822 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 03:05:55.634834 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 03:05:55.634845 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 03:05:55.634963 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 03:05:55.634974 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 03:05:55.634984 kernel: kvm-guest: PV spinlocks enabled
Jan 20 03:05:55.634994 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 03:05:55.635006 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 03:05:55.635016 kernel: random: crng init done
Jan 20 03:05:55.635030 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 03:05:55.635041 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 03:05:55.635052 kernel: Fallback order for Node 0: 0
Jan 20 03:05:55.635064 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 20 03:05:55.635073 kernel: Policy zone: DMA32
Jan 20 03:05:55.635082 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 03:05:55.635091 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 03:05:55.635100 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 03:05:55.635109 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 03:05:55.635125 kernel: Dynamic Preempt: voluntary
Jan 20 03:05:55.635136 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 03:05:55.635146 kernel: rcu: RCU event tracing is enabled.
Jan 20 03:05:55.635155 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 03:05:55.635164 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 03:05:55.635175 kernel: Rude variant of Tasks RCU enabled.
Jan 20 03:05:55.635186 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 03:05:55.635198 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 03:05:55.635207 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 03:05:55.635221 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 03:05:55.635230 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 03:05:55.635241 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 03:05:55.635252 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 03:05:55.635263 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 03:05:55.635285 kernel: Console: colour VGA+ 80x25
Jan 20 03:05:55.635302 kernel: printk: legacy console [ttyS0] enabled
Jan 20 03:05:55.635312 kernel: ACPI: Core revision 20240827
Jan 20 03:05:55.635321 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 03:05:55.635330 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 03:05:55.635340 kernel: x2apic enabled
Jan 20 03:05:55.635354 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 03:05:55.635366 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 03:05:55.635378 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 03:05:55.635389 kernel: kvm-guest: setup PV IPIs
Jan 20 03:05:55.635401 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 03:05:55.635418 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 03:05:55.635430 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 03:05:55.635442 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 03:05:55.635453 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 03:05:55.635464 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 03:05:55.635476 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 03:05:55.635488 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 03:05:55.635499 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 03:05:55.635510 kernel: Speculative Store Bypass: Vulnerable
Jan 20 03:05:55.635527 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 03:05:55.635540 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 03:05:55.635553 kernel: active return thunk: srso_alias_return_thunk
Jan 20 03:05:55.635566 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 03:05:55.635575 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 03:05:55.635585 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 03:05:55.635594 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 03:05:55.635603 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 03:05:55.635620 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 03:05:55.635632 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 03:05:55.635644 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 03:05:55.635655 kernel: Freeing SMP alternatives memory: 32K
Jan 20 03:05:55.635667 kernel: pid_max: default: 32768 minimum: 301
Jan 20 03:05:55.635679 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 03:05:55.635692 kernel: landlock: Up and running.
Jan 20 03:05:55.635704 kernel: SELinux: Initializing.
Jan 20 03:05:55.635714 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 03:05:55.635728 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 03:05:55.635812 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 03:05:55.635827 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 03:05:55.635839 kernel: signal: max sigframe size: 1776
Jan 20 03:05:55.635987 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 03:05:55.636002 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 03:05:55.636013 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 03:05:55.636025 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 03:05:55.636037 kernel: smp: Bringing up secondary CPUs ...
Jan 20 03:05:55.636054 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 03:05:55.636067 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 03:05:55.636077 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 03:05:55.636087 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 03:05:55.636097 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145096K reserved, 0K cma-reserved)
Jan 20 03:05:55.636107 kernel: devtmpfs: initialized
Jan 20 03:05:55.636119 kernel: x86/mm: Memory block size: 128MB
Jan 20 03:05:55.636131 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 03:05:55.636141 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 03:05:55.636155 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 03:05:55.636164 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 03:05:55.636175 kernel: audit: initializing netlink subsys (disabled)
Jan 20 03:05:55.636187 kernel: audit: type=2000 audit(1768878348.987:1): state=initialized audit_enabled=0 res=1
Jan 20 03:05:55.636199 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 03:05:55.636211 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 03:05:55.636223 kernel: cpuidle: using governor menu
Jan 20 03:05:55.636235 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 03:05:55.636247 kernel: dca service started, version 1.12.1
Jan 20 03:05:55.636263 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 03:05:55.636276 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 03:05:55.636286 kernel: PCI: Using configuration type 1 for base access
Jan 20 03:05:55.636296 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 03:05:55.636306 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 03:05:55.636315 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 03:05:55.636327 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 03:05:55.636338 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 03:05:55.636350 kernel: ACPI: Added _OSI(Module Device)
Jan 20 03:05:55.636366 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 03:05:55.636378 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 03:05:55.636390 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 03:05:55.636402 kernel: ACPI: Interpreter enabled
Jan 20 03:05:55.636414 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 03:05:55.636426 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 03:05:55.636438 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 03:05:55.636448 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 03:05:55.636458 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 03:05:55.636471 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 03:05:55.636731 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 03:05:55.637160 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 03:05:55.637343 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 03:05:55.637363 kernel: PCI host bridge to bus 0000:00
Jan 20 03:05:55.637545 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 03:05:55.637717 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 03:05:55.638109 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 03:05:55.638335 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 20 03:05:55.638598 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 03:05:55.639040 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 20 03:05:55.639194 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 03:05:55.639445 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 03:05:55.639979 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 03:05:55.640151 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 20 03:05:55.640312 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 20 03:05:55.640467 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 20 03:05:55.640619 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 03:05:55.640991 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 03:05:55.641161 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 03:05:55.641350 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 20 03:05:55.641533 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 20 03:05:55.641731 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 03:05:55.642141 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 03:05:55.642328 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 20 03:05:55.642518 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 20 03:05:55.642720 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 03:05:55.643104 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 20 03:05:55.643271 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 20 03:05:55.643429 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 20 03:05:55.643585 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 20 03:05:55.643837 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 03:05:55.644136 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 03:05:55.644395 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 10742 usecs
Jan 20 03:05:55.644597 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 03:05:55.645146 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 20 03:05:55.645337 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 20 03:05:55.645692 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 03:05:55.646110 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 03:05:55.646131 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 03:05:55.646149 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 03:05:55.646159 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 03:05:55.646169 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 03:05:55.646178 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 03:05:55.646190 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 03:05:55.646202 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 03:05:55.646214 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 03:05:55.646226 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 03:05:55.646237 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 03:05:55.646254 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 03:05:55.646267 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 03:05:55.646279 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 03:05:55.646289 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 03:05:55.646299 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 03:05:55.646308 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 03:05:55.646318 kernel: iommu: Default domain type: Translated
Jan 20 03:05:55.646330 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 03:05:55.646342 kernel: PCI: Using ACPI for IRQ routing
Jan 20 03:05:55.646358 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 03:05:55.646370 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 03:05:55.646382 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 20 03:05:55.646560 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 03:05:55.646824 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 03:05:55.647152 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 03:05:55.647169 kernel: vgaarb: loaded
Jan 20 03:05:55.647182 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 03:05:55.647195 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 03:05:55.647210 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 03:05:55.647220 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 03:05:55.647229 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 03:05:55.647241 kernel: pnp: PnP ACPI init
Jan 20 03:05:55.647416 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 03:05:55.647434 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 03:05:55.647446 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 03:05:55.647458 kernel: NET: Registered PF_INET protocol family
Jan 20 03:05:55.647475 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 03:05:55.647487 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 03:05:55.647499 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 03:05:55.647510 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 03:05:55.647521 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 03:05:55.647532 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 03:05:55.647544 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 03:05:55.647556 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 03:05:55.647567 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 03:05:55.647582 kernel: NET: Registered PF_XDP protocol family
Jan 20 03:05:55.647733 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 03:05:55.648100 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 03:05:55.648251 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 03:05:55.648396 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 20 03:05:55.648538 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 03:05:55.648680 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 20 03:05:55.648695 kernel: PCI: CLS 0 bytes, default 64
Jan 20 03:05:55.648713 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 03:05:55.648724 kernel: Initialise system trusted keyrings
Jan 20 03:05:55.648735 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 03:05:55.648830 kernel: Key type asymmetric registered
Jan 20 03:05:55.648842 kernel: Asymmetric key parser 'x509' registered
Jan 20 03:05:55.648974 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 03:05:55.648986 kernel: io scheduler mq-deadline registered
Jan 20 03:05:55.648998 kernel: io scheduler kyber registered
Jan 20 03:05:55.649009 kernel: io scheduler bfq registered
Jan 20 03:05:55.649025 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 03:05:55.649037 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 03:05:55.649050 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 03:05:55.649063 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 03:05:55.649074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 03:05:55.649083 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 03:05:55.649093 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 03:05:55.649102 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 03:05:55.649112 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 03:05:55.649285 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 03:05:55.649302 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 03:05:55.649448 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 03:05:55.649598 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T03:05:54 UTC (1768878354)
Jan 20 03:05:55.650035 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 20 03:05:55.650055 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 03:05:55.650069 kernel: NET: Registered PF_INET6 protocol family
Jan 20 03:05:55.650079 kernel: Segment Routing with IPv6
Jan 20 03:05:55.650094 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 03:05:55.650104 kernel: NET: Registered PF_PACKET protocol family
Jan 20 03:05:55.650114 kernel: Key type dns_resolver registered
Jan 20 03:05:55.650125 kernel: IPI shorthand broadcast: enabled
Jan 20 03:05:55.650138 kernel: sched_clock: Marking stable (4802067974, 1242076912)->(6521886429, -477741543)
Jan 20 03:05:55.650149 kernel: registered taskstats version 1
Jan 20 03:05:55.650158 kernel: Loading compiled-in X.509 certificates
Jan 20 03:05:55.650168 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9'
Jan 20 03:05:55.650178 kernel: Demotion targets for Node 0: null
Jan 20 03:05:55.650193 kernel: Key type .fscrypt registered
Jan 20 03:05:55.650204 kernel: Key type fscrypt-provisioning registered
Jan 20 03:05:55.650215 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 03:05:55.650226 kernel: ima: Allocated hash algorithm: sha1
Jan 20 03:05:55.650237 kernel: ima: No architecture policies found
Jan 20 03:05:55.650248 kernel: clk: Disabling unused clocks
Jan 20 03:05:55.650259 kernel: Warning: unable to open an initial console.
Jan 20 03:05:55.650270 kernel: Freeing unused kernel image (initmem) memory: 46204K
Jan 20 03:05:55.650284 kernel: Write protecting the kernel read-only data: 40960k
Jan 20 03:05:55.650295 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 20 03:05:55.650306 kernel: Run /init as init process
Jan 20 03:05:55.650317 kernel: with arguments:
Jan 20 03:05:55.650328 kernel: /init
Jan 20 03:05:55.650339 kernel: with environment:
Jan 20 03:05:55.650349 kernel: HOME=/
Jan 20 03:05:55.650360 kernel: TERM=linux
Jan 20 03:05:55.650372 systemd[1]: Successfully made /usr/ read-only.
Jan 20 03:05:55.650391 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 03:05:55.650403 systemd[1]: Detected virtualization kvm.
Jan 20 03:05:55.650414 systemd[1]: Detected architecture x86-64.
Jan 20 03:05:55.650425 systemd[1]: Running in initrd.
Jan 20 03:05:55.650436 systemd[1]: No hostname configured, using default hostname.
Jan 20 03:05:55.650448 systemd[1]: Hostname set to <localhost>.
Jan 20 03:05:55.650460 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 03:05:55.650471 systemd[1]: Queued start job for default target initrd.target.
Jan 20 03:05:55.650497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 03:05:55.650514 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 03:05:55.650529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 03:05:55.650541 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 03:05:55.650553 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 03:05:55.650569 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 03:05:55.650582 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 03:05:55.650595 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 03:05:55.650607 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 03:05:55.650619 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 03:05:55.650630 systemd[1]: Reached target paths.target - Path Units. Jan 20 03:05:55.650642 systemd[1]: Reached target slices.target - Slice Units. Jan 20 03:05:55.650653 systemd[1]: Reached target swap.target - Swaps. Jan 20 03:05:55.650667 systemd[1]: Reached target timers.target - Timer Units. Jan 20 03:05:55.650678 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 03:05:55.650690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 03:05:55.650702 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 03:05:55.650714 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 03:05:55.650727 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 03:05:55.650819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 03:05:55.650833 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 03:05:55.650978 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 03:05:55.650992 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 03:05:55.651005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 03:05:55.651017 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 03:05:55.651030 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 03:05:55.651042 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 03:05:55.651054 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 03:05:55.651069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 03:05:55.651084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:05:55.651095 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 03:05:55.651109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 03:05:55.651162 systemd-journald[203]: Collecting audit messages is disabled. Jan 20 03:05:55.651193 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 03:05:55.651204 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 20 03:05:55.651220 systemd-journald[203]: Journal started Jan 20 03:05:55.651245 systemd-journald[203]: Runtime Journal (/run/log/journal/73f3ed4c9964410080dae6db5ad8723b) is 6M, max 48.3M, 42.2M free. Jan 20 03:05:55.650000 systemd-modules-load[204]: Inserted module 'overlay' Jan 20 03:05:55.667044 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 03:05:55.693351 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 03:05:55.742185 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 03:05:55.745084 kernel: Bridge firewalling registered Jan 20 03:05:55.744998 systemd-modules-load[204]: Inserted module 'br_netfilter' Jan 20 03:05:55.747448 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 03:05:55.760651 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 03:05:56.165277 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 03:05:56.181837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:05:56.200266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 03:05:56.341486 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 03:05:56.343092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:05:56.481639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 03:05:56.609473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 03:05:56.614480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:05:56.618535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 03:05:56.635493 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 03:05:56.648590 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 03:05:56.710460 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 03:05:56.725696 systemd-resolved[246]: Positive Trust Anchors: Jan 20 03:05:56.725710 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 03:05:56.725736 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 03:05:56.728567 systemd-resolved[246]: Defaulting to hostname 'linux'. 
Jan 20 03:05:56.729996 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 03:05:56.746393 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 03:05:57.007120 kernel: SCSI subsystem initialized Jan 20 03:05:57.024057 kernel: Loading iSCSI transport class v2.0-870. Jan 20 03:05:57.050086 kernel: iscsi: registered transport (tcp) Jan 20 03:05:57.087403 kernel: iscsi: registered transport (qla4xxx) Jan 20 03:05:57.087466 kernel: QLogic iSCSI HBA Driver Jan 20 03:05:57.133603 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 03:05:57.189386 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 03:05:57.200827 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 03:05:57.321687 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 03:05:57.338225 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 03:05:57.427152 kernel: raid6: avx2x4 gen() 27730 MB/s Jan 20 03:05:57.447059 kernel: raid6: avx2x2 gen() 26314 MB/s Jan 20 03:05:57.470237 kernel: raid6: avx2x1 gen() 20000 MB/s Jan 20 03:05:57.470312 kernel: raid6: using algorithm avx2x4 gen() 27730 MB/s Jan 20 03:05:57.494549 kernel: raid6: .... xor() 3869 MB/s, rmw enabled Jan 20 03:05:57.494675 kernel: raid6: using avx2x2 recovery algorithm Jan 20 03:05:57.527069 kernel: xor: automatically using best checksumming function avx Jan 20 03:05:57.796006 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 03:05:57.813670 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 03:05:57.825194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 03:05:57.894178 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jan 20 03:05:57.903446 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 03:05:57.920548 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 03:05:57.984229 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation Jan 20 03:05:58.057490 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 03:05:58.078700 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 03:05:58.202705 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 03:05:58.212223 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 03:05:58.278008 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 03:05:58.330312 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 03:05:58.350131 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 03:05:58.350194 kernel: GPT:9289727 != 19775487 Jan 20 03:05:58.350217 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 03:05:58.359174 kernel: GPT:9289727 != 19775487 Jan 20 03:05:58.359278 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 03:05:58.359685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 03:05:58.388610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:05:58.388639 kernel: libata version 3.00 loaded. Jan 20 03:05:58.360133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 03:05:58.400232 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:05:58.424451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:05:58.435624 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 03:05:58.442742 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 03:05:58.471323 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 03:05:58.481196 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 03:05:58.496057 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 03:05:58.496304 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 03:05:58.496448 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 03:05:58.525345 kernel: scsi host0: ahci Jan 20 03:05:58.530235 kernel: scsi host1: ahci Jan 20 03:05:58.535987 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 20 03:05:58.538043 kernel: scsi host2: ahci Jan 20 03:05:58.538292 kernel: scsi host3: ahci Jan 20 03:05:58.541048 kernel: scsi host4: ahci Jan 20 03:05:58.543121 kernel: scsi host5: ahci Jan 20 03:05:58.543329 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Jan 20 03:05:58.543347 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Jan 20 03:05:58.543362 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Jan 20 03:05:58.543377 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Jan 20 03:05:58.543391 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Jan 20 03:05:58.543407 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Jan 20 03:05:58.556040 kernel: AES CTR mode by8 optimization enabled Jan 20 03:05:58.586615 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 03:05:59.155479 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 03:05:59.155519 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 03:05:59.155536 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 03:05:59.155554 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 03:05:59.155571 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 03:05:59.155596 kernel: ata3.00: applying bridge limits Jan 20 03:05:59.155613 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 03:05:59.155628 kernel: ata3.00: configured for UDMA/100 Jan 20 03:05:59.155644 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 03:05:59.156266 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 03:05:59.156287 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 03:05:59.156304 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 03:05:59.156320 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 03:05:59.156551 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 03:05:59.156575 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 03:05:59.173155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:05:59.205676 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 03:05:59.232004 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 20 03:05:59.258436 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 03:05:59.276236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 03:05:59.283217 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 03:05:59.341156 disk-uuid[632]: Primary Header is updated. Jan 20 03:05:59.341156 disk-uuid[632]: Secondary Entries is updated. Jan 20 03:05:59.341156 disk-uuid[632]: Secondary Header is updated. Jan 20 03:05:59.365677 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:05:59.567401 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 03:05:59.575303 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 03:05:59.589662 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 03:05:59.597164 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 03:05:59.606129 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 03:05:59.678271 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 03:06:00.386157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:06:00.388626 disk-uuid[633]: The operation has completed successfully. Jan 20 03:06:00.456386 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 03:06:00.456701 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 03:06:00.520705 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 03:06:00.567761 sh[659]: Success Jan 20 03:06:00.622463 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 03:06:00.622541 kernel: device-mapper: uevent: version 1.0.3 Jan 20 03:06:00.623206 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 03:06:00.663102 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 20 03:06:00.727736 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 03:06:00.749615 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 03:06:00.782083 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 03:06:00.802301 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (671) Jan 20 03:06:00.827211 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340 Jan 20 03:06:00.827282 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:06:00.864645 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 03:06:00.864724 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 03:06:00.869440 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 03:06:00.870732 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 03:06:00.884112 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 03:06:00.885712 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 20 03:06:00.937045 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 03:06:01.009128 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (694) Jan 20 03:06:01.023601 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:06:01.023651 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:06:01.046562 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:06:01.046951 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:06:01.070178 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:06:01.075373 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 03:06:01.098559 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 03:06:01.266407 ignition[755]: Ignition 2.22.0 Jan 20 03:06:01.266498 ignition[755]: Stage: fetch-offline Jan 20 03:06:01.266541 ignition[755]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:06:01.266555 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:06:01.266669 ignition[755]: parsed url from cmdline: "" Jan 20 03:06:01.266676 ignition[755]: no config URL provided Jan 20 03:06:01.266685 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 03:06:01.266698 ignition[755]: no config at "/usr/lib/ignition/user.ign" Jan 20 03:06:01.266729 ignition[755]: op(1): [started] loading QEMU firmware config module Jan 20 03:06:01.266737 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 03:06:01.295255 ignition[755]: op(1): [finished] loading QEMU firmware config module Jan 20 03:06:01.414652 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 03:06:01.432160 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 03:06:01.515164 systemd-networkd[849]: lo: Link UP Jan 20 03:06:01.515230 systemd-networkd[849]: lo: Gained carrier Jan 20 03:06:01.525314 systemd-networkd[849]: Enumeration completed Jan 20 03:06:01.530055 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 03:06:01.542714 systemd[1]: Reached target network.target - Network. Jan 20 03:06:01.555482 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:06:01.555583 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 03:06:01.577473 systemd-networkd[849]: eth0: Link UP Jan 20 03:06:01.577762 systemd-networkd[849]: eth0: Gained carrier Jan 20 03:06:01.577779 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 20 03:06:01.627143 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 03:06:01.944539 ignition[755]: parsing config with SHA512: 2a4d25bc88bc08ceb5d387e1b135fdd77c5f996be0bdebd1ab23e6fec986f2537d377001f5767b4d79192009cd3758ee209cffa0ac0084d519e864486f8cb273 Jan 20 03:06:01.960398 unknown[755]: fetched base config from "system" Jan 20 03:06:01.960412 unknown[755]: fetched user config from "qemu" Jan 20 03:06:01.972369 ignition[755]: fetch-offline: fetch-offline passed Jan 20 03:06:01.972625 ignition[755]: Ignition finished successfully Jan 20 03:06:01.986585 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 03:06:02.002058 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 03:06:02.004281 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 03:06:02.100603 ignition[854]: Ignition 2.22.0 Jan 20 03:06:02.100699 ignition[854]: Stage: kargs Jan 20 03:06:02.101186 ignition[854]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:06:02.101204 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:06:02.108051 ignition[854]: kargs: kargs passed Jan 20 03:06:02.108119 ignition[854]: Ignition finished successfully Jan 20 03:06:02.131473 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 03:06:02.146433 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 03:06:02.227274 ignition[862]: Ignition 2.22.0 Jan 20 03:06:02.227350 ignition[862]: Stage: disks Jan 20 03:06:02.227497 ignition[862]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:06:02.227508 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:06:02.228466 ignition[862]: disks: disks passed Jan 20 03:06:02.228531 ignition[862]: Ignition finished successfully Jan 20 03:06:02.271556 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 03:06:02.290337 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 03:06:02.301383 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 03:06:02.309454 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 03:06:02.316145 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 03:06:02.322677 systemd[1]: Reached target basic.target - Basic System. Jan 20 03:06:02.330791 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 03:06:02.421533 systemd-fsck[872]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 20 03:06:02.431519 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 03:06:02.448341 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 03:06:02.699210 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none. Jan 20 03:06:02.700280 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 03:06:02.706702 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 03:06:02.713670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 03:06:02.745171 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 20 03:06:02.764075 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (881) Jan 20 03:06:02.745777 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 03:06:02.798282 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:06:02.798317 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:06:02.746044 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 03:06:02.830345 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:06:02.830381 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:06:02.746082 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 03:06:02.844101 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 03:06:02.869724 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 03:06:02.875708 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 03:06:02.950139 initrd-setup-root[905]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 03:06:02.961670 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory Jan 20 03:06:02.978308 initrd-setup-root[919]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 03:06:02.987281 systemd-networkd[849]: eth0: Gained IPv6LL Jan 20 03:06:02.997516 initrd-setup-root[926]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 03:06:03.236438 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 03:06:03.245982 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 03:06:03.272286 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 03:06:03.291048 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 03:06:03.305338 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:06:03.337412 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 03:06:03.391392 ignition[995]: INFO : Ignition 2.22.0 Jan 20 03:06:03.391392 ignition[995]: INFO : Stage: mount Jan 20 03:06:03.401187 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 03:06:03.401187 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 03:06:03.401187 ignition[995]: INFO : mount: mount passed Jan 20 03:06:03.401187 ignition[995]: INFO : Ignition finished successfully Jan 20 03:06:03.431490 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 03:06:03.434321 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 03:06:03.704485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 03:06:03.764057 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1007) Jan 20 03:06:03.764143 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:06:03.778532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:06:03.805104 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:06:03.805195 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:06:03.808645 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 03:06:03.889062 ignition[1024]: INFO : Ignition 2.22.0
Jan 20 03:06:03.889062 ignition[1024]: INFO : Stage: files
Jan 20 03:06:03.905228 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 03:06:03.905228 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 03:06:03.905228 ignition[1024]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 03:06:03.905228 ignition[1024]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 03:06:03.905228 ignition[1024]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 03:06:03.956591 ignition[1024]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 03:06:03.956591 ignition[1024]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 03:06:03.956591 ignition[1024]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 03:06:03.956591 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 03:06:03.956591 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 20 03:06:03.907213 unknown[1024]: wrote ssh authorized keys file for user: core
Jan 20 03:06:04.041088 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 03:06:04.183763 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 20 03:06:04.183763 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 20 03:06:04.183763 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 20 03:06:04.417182 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 20 03:06:04.811117 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 20 03:06:04.811117 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 03:06:04.838601 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 20 03:06:05.279521 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 20 03:06:06.200592 ignition[1024]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 20 03:06:06.200592 ignition[1024]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 20 03:06:06.227709 ignition[1024]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 03:06:06.248500 ignition[1024]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 03:06:06.248500 ignition[1024]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 20 03:06:06.248500 ignition[1024]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 20 03:06:06.248500 ignition[1024]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 03:06:06.305752 ignition[1024]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 20 03:06:06.305752 ignition[1024]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 20 03:06:06.305752 ignition[1024]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 20 03:06:06.397050 ignition[1024]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 03:06:06.409065 ignition[1024]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 20 03:06:06.409065 ignition[1024]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 20 03:06:06.409065 ignition[1024]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 03:06:06.409065 ignition[1024]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 03:06:06.409065 ignition[1024]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 03:06:06.409065 ignition[1024]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 03:06:06.409065 ignition[1024]: INFO : files: files passed
Jan 20 03:06:06.409065 ignition[1024]: INFO : Ignition finished successfully
Jan 20 03:06:06.410364 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 03:06:06.495597 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 03:06:06.533213 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 03:06:06.542404 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 03:06:06.542572 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 03:06:06.583151 initrd-setup-root-after-ignition[1053]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 20 03:06:06.591995 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 03:06:06.591995 initrd-setup-root-after-ignition[1055]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 03:06:06.616778 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 03:06:06.629789 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 03:06:06.638295 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 03:06:06.663495 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 03:06:06.783202 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 03:06:06.783467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 03:06:06.799378 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 03:06:06.816793 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 03:06:06.817098 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 03:06:06.818414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 03:06:06.889184 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 03:06:06.891408 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 03:06:06.961293 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 03:06:06.961759 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 03:06:06.979521 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 03:06:07.010385 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 03:06:07.010629 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 03:06:07.036136 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 03:06:07.036487 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 03:06:07.053565 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 03:06:07.067157 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 03:06:07.103041 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 03:06:07.103440 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 03:06:07.120647 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 03:06:07.141570 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 03:06:07.160596 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 03:06:07.201333 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 03:06:07.209763 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 03:06:07.223185 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 03:06:07.223312 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 03:06:07.242614 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 03:06:07.249690 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 03:06:07.265107 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 03:06:07.265451 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 03:06:07.288370 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 03:06:07.288600 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 03:06:07.310180 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 03:06:07.310403 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 03:06:07.321343 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 03:06:07.336692 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 03:06:07.338284 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 03:06:07.352610 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 03:06:07.364648 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 03:06:07.379593 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 03:06:07.379689 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 03:06:07.392363 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 03:06:07.392506 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 03:06:07.423995 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 03:06:07.424179 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 03:06:07.438657 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 03:06:07.438827 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 03:06:07.466437 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 03:06:07.477420 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 03:06:07.477663 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 03:06:07.607639 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 03:06:07.616514 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 03:06:07.616693 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 03:06:07.659536 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 03:06:07.659725 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 03:06:07.690157 ignition[1079]: INFO : Ignition 2.22.0
Jan 20 03:06:07.690157 ignition[1079]: INFO : Stage: umount
Jan 20 03:06:07.690157 ignition[1079]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 03:06:07.690157 ignition[1079]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 20 03:06:07.690157 ignition[1079]: INFO : umount: umount passed
Jan 20 03:06:07.690157 ignition[1079]: INFO : Ignition finished successfully
Jan 20 03:06:07.683606 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 03:06:07.757353 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 03:06:07.758280 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 03:06:07.758416 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 03:06:07.779621 systemd[1]: Stopped target network.target - Network.
Jan 20 03:06:07.796842 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 03:06:07.797172 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 03:06:07.814195 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 03:06:07.814257 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 03:06:07.831429 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 03:06:07.831493 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 03:06:07.846249 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 03:06:07.846300 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 03:06:07.860294 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 03:06:07.873149 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 03:06:07.890200 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 03:06:07.922449 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 03:06:07.922570 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 03:06:07.929274 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 03:06:07.929329 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 03:06:07.999601 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 03:06:07.999843 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 03:06:08.028758 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 20 03:06:08.029412 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 03:06:08.029649 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 03:06:08.055669 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 20 03:06:08.057343 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 20 03:06:08.072448 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 03:06:08.072511 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 03:06:08.084142 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 03:06:08.100176 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 03:06:08.100254 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 03:06:08.131181 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 03:06:08.131253 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 03:06:08.180672 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 03:06:08.180824 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 03:06:08.197534 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 03:06:08.197606 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 03:06:08.214721 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 03:06:08.224320 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 03:06:08.224406 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 03:06:08.262149 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 03:06:08.262359 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 03:06:08.286356 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 03:06:08.286594 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 03:06:08.296791 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 03:06:08.296839 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 03:06:08.310556 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 03:06:08.310602 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 03:06:08.324552 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 03:06:08.324618 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 03:06:08.341161 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 03:06:08.341230 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 03:06:08.354157 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 03:06:08.354217 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 03:06:08.373184 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 03:06:08.386038 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 20 03:06:08.386159 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 03:06:08.417421 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 03:06:08.417507 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 03:06:08.452539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 03:06:08.452610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 03:06:08.485148 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 20 03:06:08.485228 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 03:06:08.485305 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 20 03:06:08.485699 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 03:06:08.486086 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 03:06:08.493694 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 03:06:08.523437 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 03:06:08.603514 systemd[1]: Switching root.
Jan 20 03:06:08.672386 systemd-journald[203]: Journal stopped
Jan 20 03:06:11.132305 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 20 03:06:11.132374 kernel: SELinux: policy capability network_peer_controls=1
Jan 20 03:06:11.132392 kernel: SELinux: policy capability open_perms=1
Jan 20 03:06:11.132403 kernel: SELinux: policy capability extended_socket_class=1
Jan 20 03:06:11.132412 kernel: SELinux: policy capability always_check_network=0
Jan 20 03:06:11.132425 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 20 03:06:11.132435 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 20 03:06:11.132447 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 20 03:06:11.132457 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 20 03:06:11.132471 kernel: SELinux: policy capability userspace_initial_context=0
Jan 20 03:06:11.132482 kernel: audit: type=1403 audit(1768878368.993:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 03:06:11.132498 systemd[1]: Successfully loaded SELinux policy in 149.510ms.
Jan 20 03:06:11.132515 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.199ms.
Jan 20 03:06:11.132526 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 03:06:11.132540 systemd[1]: Detected virtualization kvm.
Jan 20 03:06:11.132553 systemd[1]: Detected architecture x86-64.
Jan 20 03:06:11.132563 systemd[1]: Detected first boot.
Jan 20 03:06:11.132574 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 03:06:11.132585 zram_generator::config[1127]: No configuration found.
Jan 20 03:06:11.132596 kernel: Guest personality initialized and is inactive
Jan 20 03:06:11.132606 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 20 03:06:11.132616 kernel: Initialized host personality
Jan 20 03:06:11.132626 kernel: NET: Registered PF_VSOCK protocol family
Jan 20 03:06:11.132636 systemd[1]: Populated /etc with preset unit settings.
Jan 20 03:06:11.132650 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 20 03:06:11.132660 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 03:06:11.132671 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 20 03:06:11.132682 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 03:06:11.132693 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 20 03:06:11.132704 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 20 03:06:11.132714 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 20 03:06:11.132730 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 20 03:06:11.132743 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 20 03:06:11.132754 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 20 03:06:11.132770 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 20 03:06:11.132792 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 20 03:06:11.132807 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 03:06:11.132823 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 03:06:11.132838 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 20 03:06:11.133130 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 20 03:06:11.133146 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 20 03:06:11.133162 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 03:06:11.133173 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 20 03:06:11.133184 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 03:06:11.133195 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 03:06:11.133206 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 20 03:06:11.133216 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 20 03:06:11.133227 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 20 03:06:11.133237 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 20 03:06:11.133250 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 03:06:11.133261 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 03:06:11.133272 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 03:06:11.133282 systemd[1]: Reached target swap.target - Swaps.
Jan 20 03:06:11.133293 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 20 03:06:11.133305 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 20 03:06:11.133315 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 20 03:06:11.133328 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 03:06:11.133338 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 03:06:11.133351 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 03:06:11.133362 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 20 03:06:11.133373 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 20 03:06:11.133383 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 20 03:06:11.133394 systemd[1]: Mounting media.mount - External Media Directory...
Jan 20 03:06:11.133405 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 03:06:11.133416 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 20 03:06:11.133426 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 20 03:06:11.133437 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 20 03:06:11.133450 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 03:06:11.133461 systemd[1]: Reached target machines.target - Containers.
Jan 20 03:06:11.133472 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 20 03:06:11.133488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 03:06:11.133498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 03:06:11.133510 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 20 03:06:11.133521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 03:06:11.133531 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 03:06:11.133544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 03:06:11.133554 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 20 03:06:11.133566 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 03:06:11.133576 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 20 03:06:11.133587 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 03:06:11.133598 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 20 03:06:11.133609 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 20 03:06:11.133620 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 20 03:06:11.133631 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 03:06:11.133644 kernel: fuse: init (API version 7.41)
Jan 20 03:06:11.133654 kernel: loop: module loaded
Jan 20 03:06:11.133666 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 03:06:11.133677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 03:06:11.133687 kernel: ACPI: bus type drm_connector registered
Jan 20 03:06:11.133698 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 03:06:11.133708 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 20 03:06:11.133742 systemd-journald[1212]: Collecting audit messages is disabled.
Jan 20 03:06:11.133774 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 20 03:06:11.133795 systemd-journald[1212]: Journal started
Jan 20 03:06:11.133822 systemd-journald[1212]: Runtime Journal (/run/log/journal/73f3ed4c9964410080dae6db5ad8723b) is 6M, max 48.3M, 42.2M free.
Jan 20 03:06:10.046830 systemd[1]: Queued start job for default target multi-user.target.
Jan 20 03:06:10.075342 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 20 03:06:10.077325 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 03:06:10.078240 systemd[1]: systemd-journald.service: Consumed 2.380s CPU time.
Jan 20 03:06:11.153108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 03:06:11.169285 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 20 03:06:11.169345 systemd[1]: Stopped verity-setup.service.
Jan 20 03:06:11.176303 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 03:06:11.196125 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 03:06:11.204629 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 20 03:06:11.211410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 20 03:06:11.218600 systemd[1]: Mounted media.mount - External Media Directory.
Jan 20 03:06:11.226121 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 20 03:06:11.234392 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 20 03:06:11.242413 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 20 03:06:11.249334 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 20 03:06:11.257568 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 03:06:11.267498 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 03:06:11.267827 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 20 03:06:11.277358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 03:06:11.277686 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 03:06:11.286546 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 03:06:11.287141 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 03:06:11.294430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 03:06:11.294751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 03:06:11.303318 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 03:06:11.303642 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 03:06:11.311447 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 03:06:11.311785 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 03:06:11.319776 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 03:06:11.328421 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 03:06:11.337592 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 03:06:11.346627 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 20 03:06:11.355455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 03:06:11.380485 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 03:06:11.391427 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 03:06:11.410676 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 03:06:11.419281 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 03:06:11.419393 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 03:06:11.428789 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 20 03:06:11.440206 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 03:06:11.447720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 03:06:11.467426 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 03:06:11.486512 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 03:06:11.496498 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 03:06:11.498335 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 03:06:11.507200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 03:06:11.509295 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 03:06:11.553457 systemd-journald[1212]: Time spent on flushing to /var/log/journal/73f3ed4c9964410080dae6db5ad8723b is 29.937ms for 978 entries.
Jan 20 03:06:11.553457 systemd-journald[1212]: System Journal (/var/log/journal/73f3ed4c9964410080dae6db5ad8723b) is 8M, max 195.6M, 187.6M free.
Jan 20 03:06:11.596301 systemd-journald[1212]: Received client request to flush runtime journal.
Jan 20 03:06:11.596353 kernel: loop0: detected capacity change from 0 to 110984
Jan 20 03:06:11.519295 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 03:06:11.533238 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 03:06:11.546711 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 03:06:11.602428 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 03:06:11.612618 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 03:06:11.630602 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 03:06:11.648296 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 03:06:11.663684 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 03:06:11.675424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 03:06:11.686626 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 03:06:11.701298 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 03:06:11.722818 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 03:06:11.761377 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 03:06:11.767131 kernel: loop1: detected capacity change from 0 to 128560
Jan 20 03:06:11.779225 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jan 20 03:06:11.779302 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jan 20 03:06:11.793166 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 03:06:11.829147 kernel: loop2: detected capacity change from 0 to 229808
Jan 20 03:06:11.890165 kernel: loop3: detected capacity change from 0 to 110984
Jan 20 03:06:11.928214 kernel: loop4: detected capacity change from 0 to 128560
Jan 20 03:06:11.964675 kernel: loop5: detected capacity change from 0 to 229808
Jan 20 03:06:11.992257 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 20 03:06:11.993081 (sd-merge)[1270]: Merged extensions into '/usr'.
Jan 20 03:06:11.999695 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 03:06:11.999777 systemd[1]: Reloading...
Jan 20 03:06:12.093067 zram_generator::config[1295]: No configuration found.
Jan 20 03:06:12.141607 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 03:06:12.365490 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 03:06:12.365729 systemd[1]: Reloading finished in 365 ms.
Jan 20 03:06:12.417568 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 03:06:12.429552 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 03:06:12.442579 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 03:06:12.497792 systemd[1]: Starting ensure-sysext.service...
Jan 20 03:06:12.506734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 03:06:12.536246 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 03:06:12.567341 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 03:06:12.567449 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 03:06:12.567753 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 03:06:12.568353 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 20 03:06:12.570300 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 20 03:06:12.570643 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jan 20 03:06:12.570827 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jan 20 03:06:12.573666 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Jan 20 03:06:12.573687 systemd[1]: Reloading...
Jan 20 03:06:12.580728 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 03:06:12.580748 systemd-tmpfiles[1335]: Skipping /boot
Jan 20 03:06:12.607524 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 03:06:12.607610 systemd-tmpfiles[1335]: Skipping /boot
Jan 20 03:06:12.611370 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Jan 20 03:06:12.658141 zram_generator::config[1360]: No configuration found.
Jan 20 03:06:12.922196 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 03:06:12.947177 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 20 03:06:12.955623 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 03:06:12.956096 systemd[1]: Reloading finished in 381 ms.
Jan 20 03:06:12.960050 kernel: ACPI: button: Power Button [PWRF]
Jan 20 03:06:13.000624 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 03:06:13.021089 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 03:06:13.084353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 03:06:13.101306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 03:06:13.103588 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 03:06:13.120787 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 03:06:13.132183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 03:06:13.135063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 03:06:13.150579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 03:06:13.164429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 03:06:13.174361 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 03:06:13.176280 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 03:06:13.186260 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 03:06:13.189680 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 03:06:13.209377 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 03:06:13.223741 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 03:06:13.236706 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 03:06:13.245286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 03:06:13.249195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 03:06:13.250211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 03:06:13.258815 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 03:06:13.260137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 03:06:13.271277 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 03:06:13.271776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 03:06:13.307158 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 03:06:13.307592 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 03:06:13.311662 systemd[1]: Finished ensure-sysext.service.
Jan 20 03:06:13.322536 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 03:06:13.350630 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 03:06:13.351206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 03:06:13.353608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 03:06:13.362369 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 03:06:13.372395 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 03:06:13.387752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 03:06:13.399440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 03:06:13.399842 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 03:06:13.413420 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 03:06:13.454490 augenrules[1492]: No rules
Jan 20 03:06:13.488762 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 03:06:13.498618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 03:06:13.506042 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 03:06:13.517377 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 03:06:13.521465 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 03:06:13.534337 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 03:06:13.548753 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 03:06:13.567255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 03:06:13.570595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 03:06:13.588595 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 03:06:13.590774 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 03:06:13.592791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 03:06:13.593475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 03:06:13.601558 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 03:06:13.602274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 03:06:13.667309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 03:06:13.667417 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 03:06:13.676269 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 03:06:13.689557 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 03:06:13.712640 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 03:06:13.754600 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 03:06:13.796690 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 03:06:13.929314 kernel: kvm_amd: TSC scaling supported
Jan 20 03:06:13.929434 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 03:06:13.929458 kernel: kvm_amd: Nested Paging enabled
Jan 20 03:06:13.930444 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 03:06:13.930483 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 03:06:14.175354 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 03:06:14.176994 systemd-resolved[1466]: Positive Trust Anchors:
Jan 20 03:06:14.177330 systemd-resolved[1466]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 03:06:14.177412 systemd-resolved[1466]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 03:06:14.185279 systemd-networkd[1465]: lo: Link UP
Jan 20 03:06:14.185322 systemd-networkd[1465]: lo: Gained carrier
Jan 20 03:06:14.186507 systemd-resolved[1466]: Defaulting to hostname 'linux'.
Jan 20 03:06:14.190303 systemd-networkd[1465]: Enumeration completed
Jan 20 03:06:14.191337 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 03:06:14.191370 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 03:06:14.193013 kernel: EDAC MC: Ver: 3.0.0
Jan 20 03:06:14.193276 systemd-networkd[1465]: eth0: Link UP
Jan 20 03:06:14.193563 systemd-networkd[1465]: eth0: Gained carrier
Jan 20 03:06:14.193618 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 03:06:14.215055 systemd-networkd[1465]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 03:06:14.216359 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection.
Jan 20 03:06:14.937791 systemd-resolved[1466]: Clock change detected. Flushing caches.
Jan 20 03:06:14.937870 systemd-timesyncd[1490]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 20 03:06:14.937937 systemd-timesyncd[1490]: Initial clock synchronization to Tue 2026-01-20 03:06:14.937719 UTC.
Jan 20 03:06:15.016923 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 03:06:15.017532 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 03:06:15.018660 systemd[1]: Reached target network.target - Network.
Jan 20 03:06:15.018805 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 03:06:15.019265 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 03:06:15.022664 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 20 03:06:15.024726 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 03:06:15.056255 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 03:06:15.062901 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 03:06:15.067209 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 03:06:15.071475 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 03:06:15.075908 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 20 03:06:15.079996 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 03:06:15.083977 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 03:06:15.088389 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 03:06:15.092341 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 03:06:15.092396 systemd[1]: Reached target paths.target - Path Units.
Jan 20 03:06:15.095427 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 03:06:15.099280 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 03:06:15.104720 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 03:06:15.109863 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 03:06:15.113894 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 03:06:15.117466 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 03:06:15.127208 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 03:06:15.130940 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 03:06:15.135707 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 20 03:06:15.139716 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 03:06:15.144438 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 03:06:15.147474 systemd[1]: Reached target basic.target - Basic System.
Jan 20 03:06:15.150285 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 03:06:15.150351 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 03:06:15.152079 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 03:06:15.156370 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 03:06:15.176133 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 03:06:15.182660 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 03:06:15.188004 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 03:06:15.191126 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 03:06:15.194110 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 20 03:06:15.197923 jq[1533]: false
Jan 20 03:06:15.200871 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 03:06:15.206282 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 03:06:15.212190 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 03:06:15.219904 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Jan 20 03:06:15.219889 oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Jan 20 03:06:15.222039 extend-filesystems[1534]: Found /dev/vda6
Jan 20 03:06:15.223714 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 03:06:15.228284 extend-filesystems[1534]: Found /dev/vda9
Jan 20 03:06:15.231569 extend-filesystems[1534]: Checking size of /dev/vda9
Jan 20 03:06:15.234838 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 03:06:15.236164 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 03:06:15.236902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 03:06:15.238779 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 03:06:15.240082 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 03:06:15.243008 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 03:06:15.243946 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 20 03:06:15.244199 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 03:06:15.244578 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 03:06:15.244930 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 03:06:15.247219 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 20 03:06:15.247484 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 20 03:06:15.256414 jq[1554]: true
Jan 20 03:06:15.266507 extend-filesystems[1534]: Resized partition /dev/vda9
Jan 20 03:06:15.272346 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting users, quitting
Jan 20 03:06:15.272346 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 03:06:15.272326 oslogin_cache_refresh[1535]: Failure getting users, quitting
Jan 20 03:06:15.272569 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing group entry cache
Jan 20 03:06:15.272353 oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 03:06:15.272430 oslogin_cache_refresh[1535]: Refreshing group entry cache
Jan 20 03:06:15.274388 extend-filesystems[1567]: resize2fs 1.47.3 (8-Jul-2025)
Jan 20 03:06:15.279338 jq[1564]: true
Jan 20 03:06:15.289484 update_engine[1553]: I20260120 03:06:15.289266 1553 main.cc:92] Flatcar Update Engine starting
Jan 20 03:06:15.294967 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 20 03:06:15.295119 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting groups, quitting
Jan 20 03:06:15.295119 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 03:06:15.292018 oslogin_cache_refresh[1535]: Failure getting groups, quitting
Jan 20 03:06:15.292045 oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 03:06:15.295573 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 20 03:06:15.296040 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 20 03:06:15.299964 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 20 03:06:15.308792 tar[1556]: linux-amd64/LICENSE
Jan 20 03:06:15.308792 tar[1556]: linux-amd64/helm
Jan 20 03:06:15.335502 dbus-daemon[1531]: [system] SELinux support is enabled
Jan 20 03:06:15.342812 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 20 03:06:15.348978 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 20 03:06:15.352312 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 20 03:06:15.362720 update_engine[1553]: I20260120 03:06:15.358232 1553 update_check_scheduler.cc:74] Next update check in 11m28s
Jan 20 03:06:15.352341 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 20 03:06:15.358389 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 20 03:06:15.358414 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 20 03:06:15.366204 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 20 03:06:15.366204 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 20 03:06:15.366204 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 20 03:06:15.378784 extend-filesystems[1534]: Resized filesystem in /dev/vda9
Jan 20 03:06:15.382300 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 03:06:15.382733 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 03:06:15.391126 systemd-logind[1551]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 20 03:06:15.391153 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 20 03:06:15.391710 systemd-logind[1551]: New seat seat0.
Jan 20 03:06:15.396754 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 20 03:06:15.398150 bash[1593]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 03:06:15.402272 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 20 03:06:15.418668 systemd[1]: Started update-engine.service - Update Engine.
Jan 20 03:06:15.426446 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 20 03:06:15.430743 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 20 03:06:15.433922 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 20 03:06:15.512412 locksmithd[1603]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 20 03:06:15.521300 containerd[1574]: time="2026-01-20T03:06:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 20 03:06:15.522386 containerd[1574]: time="2026-01-20T03:06:15.522326004Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 20 03:06:15.532520 containerd[1574]: time="2026-01-20T03:06:15.532444876Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.458µs"
Jan 20 03:06:15.532520 containerd[1574]: time="2026-01-20T03:06:15.532500760Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 20 03:06:15.532520 containerd[1574]: time="2026-01-20T03:06:15.532520337Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 20 03:06:15.532856 containerd[1574]: time="2026-01-20T03:06:15.532793457Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 20 03:06:15.532856 containerd[1574]: time="2026-01-20T03:06:15.532837869Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 20 03:06:15.532912 containerd[1574]: time="2026-01-20T03:06:15.532871683Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 03:06:15.533026 containerd[1574]: time="2026-01-20T03:06:15.532968804Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 03:06:15.533049 containerd[1574]: time="2026-01-20T03:06:15.533024628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 03:06:15.533883 containerd[1574]: time="2026-01-20T03:06:15.533780268Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 03:06:15.533883 containerd[1574]: time="2026-01-20T03:06:15.533841663Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 03:06:15.533883 containerd[1574]: time="2026-01-20T03:06:15.533862522Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 03:06:15.533883 containerd[1574]: time="2026-01-20T03:06:15.533876348Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 20 03:06:15.534167 containerd[1574]: time="2026-01-20T03:06:15.534025516Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 20 03:06:15.534404 containerd[1574]: time="2026-01-20T03:06:15.534350283Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 03:06:15.534456 containerd[1574]: time="2026-01-20T03:06:15.534420804Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 03:06:15.534456 containerd[1574]: time="2026-01-20T03:06:15.534434279Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 20 03:06:15.534950 containerd[1574]: time="2026-01-20T03:06:15.534670964Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 20 03:06:15.535478 containerd[1574]: time="2026-01-20T03:06:15.535427684Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 20 03:06:15.535713 containerd[1574]: time="2026-01-20T03:06:15.535575950Z" level=info msg="metadata content store policy set" policy=shared
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.542973917Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543066329Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543180642Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543206851Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543229043Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543247607Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543263487Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543276482Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543287491Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543303111Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543327166Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543346712Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543491502Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 20 03:06:15.543725 containerd[1574]: time="2026-01-20T03:06:15.543511790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543525366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543535855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543544802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543555221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543572564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543719638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543747360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543759563Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543769602Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543811971Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543823432Z" level=info msg="Start snapshots syncer"
Jan 20 03:06:15.543981 containerd[1574]: time="2026-01-20T03:06:15.543884726Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 20 03:06:15.544284 containerd[1574]: time="2026-01-20T03:06:15.544221696Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 20 03:06:15.544411 containerd[1574]: time="2026-01-20T03:06:15.544298539Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 20 03:06:15.545552 containerd[1574]: time="2026-01-20T03:06:15.545499971Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 20 03:06:15.545914 containerd[1574]: time="2026-01-20T03:06:15.545869251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 20 03:06:15.545941 containerd[1574]: time="2026-01-20T03:06:15.545921007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 20 03:06:15.545941 containerd[1574]: time="2026-01-20T03:06:15.545934352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 20 03:06:15.545985 containerd[1574]: time="2026-01-20T03:06:15.545943729Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 20 03:06:15.545985 containerd[1574]: time="2026-01-20T03:06:15.545955942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 20 03:06:15.545985 containerd[1574]: time="2026-01-20T03:06:15.545965621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 20 03:06:15.546032 containerd[1574]: time="2026-01-20T03:06:15.545985999Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 20 03:06:15.546032 containerd[1574]: time="2026-01-20T03:06:15.546005885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 20 03:06:15.546032 containerd[1574]: time="2026-01-20T03:06:15.546016656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 20 03:06:15.546032 containerd[1574]: time="2026-01-20T03:06:15.546025823Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546095543Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546203044Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546214045Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546223262Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546230034Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546239292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546253999Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546274127Z" level=info msg="runtime interface created"
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546279266Z" level=info msg="created NRI interface"
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546290367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546300335Z" level=info msg="Connect containerd service"
Jan 20 03:06:15.546624 containerd[1574]: time="2026-01-20T03:06:15.546316716Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 20 03:06:15.547137 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 03:06:15.547879 containerd[1574]: time="2026-01-20T03:06:15.547813049Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 03:06:15.575081 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 20 03:06:15.580868 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 20 03:06:15.587857 systemd[1]: Started sshd@0-10.0.0.5:22-10.0.0.1:35214.service - OpenSSH per-connection server daemon (10.0.0.1:35214).
Jan 20 03:06:15.608477 systemd[1]: issuegen.service: Deactivated successfully.
Jan 20 03:06:15.609388 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 20 03:06:15.615972 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 20 03:06:15.632877 containerd[1574]: time="2026-01-20T03:06:15.632818043Z" level=info msg="Start subscribing containerd event"
Jan 20 03:06:15.632946 containerd[1574]: time="2026-01-20T03:06:15.632889226Z" level=info msg="Start recovering state"
Jan 20 03:06:15.633300 containerd[1574]: time="2026-01-20T03:06:15.633251211Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 20 03:06:15.633398 containerd[1574]: time="2026-01-20T03:06:15.633348303Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 20 03:06:15.634880 containerd[1574]: time="2026-01-20T03:06:15.634826798Z" level=info msg="Start event monitor"
Jan 20 03:06:15.634930 containerd[1574]: time="2026-01-20T03:06:15.634881430Z" level=info msg="Start cni network conf syncer for default"
Jan 20 03:06:15.634930 containerd[1574]: time="2026-01-20T03:06:15.634903572Z" level=info msg="Start streaming server"
Jan 20 03:06:15.634930 containerd[1574]: time="2026-01-20T03:06:15.634917117Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 20 03:06:15.634930 containerd[1574]: time="2026-01-20T03:06:15.634926975Z" level=info msg="runtime interface starting up..."
Jan 20 03:06:15.634994 containerd[1574]: time="2026-01-20T03:06:15.634935120Z" level=info msg="starting plugins..."
Jan 20 03:06:15.634994 containerd[1574]: time="2026-01-20T03:06:15.634955047Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 20 03:06:15.635212 systemd[1]: Started containerd.service - containerd container runtime.
Jan 20 03:06:15.635442 containerd[1574]: time="2026-01-20T03:06:15.635394437Z" level=info msg="containerd successfully booted in 0.114559s"
Jan 20 03:06:15.641149 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 20 03:06:15.649720 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 20 03:06:15.654779 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 20 03:06:15.660495 systemd[1]: Reached target getty.target - Login Prompts.
Jan 20 03:06:15.684897 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 35214 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:06:15.686865 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:06:15.694791 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 20 03:06:15.700328 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 20 03:06:15.702143 tar[1556]: linux-amd64/README.md
Jan 20 03:06:15.717461 systemd-logind[1551]: New session 1 of user core.
Jan 20 03:06:15.721182 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 20 03:06:15.730238 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 20 03:06:15.741781 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 20 03:06:15.769870 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 20 03:06:15.774366 systemd-logind[1551]: New session c1 of user core.
Jan 20 03:06:15.941745 systemd[1648]: Queued start job for default target default.target.
Jan 20 03:06:15.962156 systemd[1648]: Created slice app.slice - User Application Slice.
Jan 20 03:06:15.962217 systemd[1648]: Reached target paths.target - Paths.
Jan 20 03:06:15.962287 systemd[1648]: Reached target timers.target - Timers.
Jan 20 03:06:15.964041 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 20 03:06:15.977212 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 20 03:06:15.977367 systemd[1648]: Reached target sockets.target - Sockets.
Jan 20 03:06:15.977409 systemd[1648]: Reached target basic.target - Basic System.
Jan 20 03:06:15.977454 systemd[1648]: Reached target default.target - Main User Target.
Jan 20 03:06:15.977489 systemd[1648]: Startup finished in 194ms.
Jan 20 03:06:15.977858 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 20 03:06:15.983155 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 20 03:06:15.987856 systemd-networkd[1465]: eth0: Gained IPv6LL
Jan 20 03:06:15.990764 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 03:06:15.995065 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 03:06:16.000243 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 03:06:16.005228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:06:16.013778 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 03:06:16.042641 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 03:06:16.046511 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 20 03:06:16.046940 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 20 03:06:16.062007 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 20 03:06:16.066458 systemd[1]: Started sshd@1-10.0.0.5:22-10.0.0.1:51410.service - OpenSSH per-connection server daemon (10.0.0.1:51410).
Jan 20 03:06:16.135510 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 51410 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:06:16.137744 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:06:16.144097 systemd-logind[1551]: New session 2 of user core.
Jan 20 03:06:16.153819 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 20 03:06:16.213088 sshd[1680]: Connection closed by 10.0.0.1 port 51410
Jan 20 03:06:16.213910 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Jan 20 03:06:16.222073 systemd[1]: sshd@1-10.0.0.5:22-10.0.0.1:51410.service: Deactivated successfully.
Jan 20 03:06:16.223916 systemd[1]: session-2.scope: Deactivated successfully.
Jan 20 03:06:16.225217 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit.
Jan 20 03:06:16.227444 systemd[1]: Started sshd@2-10.0.0.5:22-10.0.0.1:51418.service - OpenSSH per-connection server daemon (10.0.0.1:51418).
Jan 20 03:06:16.234269 systemd-logind[1551]: Removed session 2.
Jan 20 03:06:16.286114 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 51418 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:06:16.287822 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:06:16.294174 systemd-logind[1551]: New session 3 of user core.
Jan 20 03:06:16.309931 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 20 03:06:16.368933 sshd[1689]: Connection closed by 10.0.0.1 port 51418
Jan 20 03:06:16.369206 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Jan 20 03:06:16.372964 systemd[1]: sshd@2-10.0.0.5:22-10.0.0.1:51418.service: Deactivated successfully.
Jan 20 03:06:16.374820 systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 03:06:16.375786 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit.
Jan 20 03:06:16.377499 systemd-logind[1551]: Removed session 3.
Jan 20 03:06:16.886323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:06:16.890287 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 03:06:16.893499 systemd[1]: Startup finished in 4.977s (kernel) + 14.081s (initrd) + 7.328s (userspace) = 26.386s.
Jan 20 03:06:16.981120 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 03:06:17.484776 kubelet[1699]: E0120 03:06:17.484633 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 03:06:17.488179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 03:06:17.488400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 03:06:17.488925 systemd[1]: kubelet.service: Consumed 995ms CPU time, 267.4M memory peak.
Jan 20 03:06:26.387496 systemd[1]: Started sshd@3-10.0.0.5:22-10.0.0.1:36558.service - OpenSSH per-connection server daemon (10.0.0.1:36558).
Jan 20 03:06:26.464541 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 36558 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:06:26.466331 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:06:26.473504 systemd-logind[1551]: New session 4 of user core.
Jan 20 03:06:26.483935 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 20 03:06:26.542177 sshd[1715]: Connection closed by 10.0.0.1 port 36558
Jan 20 03:06:26.542680 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Jan 20 03:06:26.558477 systemd[1]: sshd@3-10.0.0.5:22-10.0.0.1:36558.service: Deactivated successfully.
Jan 20 03:06:26.561323 systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 03:06:26.562703 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit.
Jan 20 03:06:26.567466 systemd[1]: Started sshd@4-10.0.0.5:22-10.0.0.1:36566.service - OpenSSH per-connection server daemon (10.0.0.1:36566).
Jan 20 03:06:26.568552 systemd-logind[1551]: Removed session 4.
Jan 20 03:06:26.643428 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 36566 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:06:26.645315 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:06:26.652568 systemd-logind[1551]: New session 5 of user core.
Jan 20 03:06:26.666976 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 20 03:06:26.720720 sshd[1724]: Connection closed by 10.0.0.1 port 36566
Jan 20 03:06:26.721086 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
Jan 20 03:06:26.734296 systemd[1]: sshd@4-10.0.0.5:22-10.0.0.1:36566.service: Deactivated successfully.
Jan 20 03:06:26.736457 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 03:06:26.738008 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit.
Jan 20 03:06:26.740518 systemd[1]: Started sshd@5-10.0.0.5:22-10.0.0.1:36572.service - OpenSSH per-connection server daemon (10.0.0.1:36572).
Jan 20 03:06:26.742269 systemd-logind[1551]: Removed session 5.
Jan 20 03:06:26.814748 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 36572 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:06:26.816548 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:06:26.823867 systemd-logind[1551]: New session 6 of user core.
Jan 20 03:06:26.833876 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 03:06:26.892393 sshd[1733]: Connection closed by 10.0.0.1 port 36572
Jan 20 03:06:26.892780 sshd-session[1730]: pam_unix(sshd:session): session closed for user core
Jan 20 03:06:26.910996 systemd[1]: sshd@5-10.0.0.5:22-10.0.0.1:36572.service: Deactivated successfully.
Jan 20 03:06:26.913038 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 03:06:26.914470 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit.
Jan 20 03:06:26.917317 systemd[1]: Started sshd@6-10.0.0.5:22-10.0.0.1:36574.service - OpenSSH per-connection server daemon (10.0.0.1:36574).
Jan 20 03:06:26.919024 systemd-logind[1551]: Removed session 6.
Jan 20 03:06:27.001529 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 36574 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:06:27.003324 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:06:27.009525 systemd-logind[1551]: New session 7 of user core.
Jan 20 03:06:27.027872 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 03:06:27.094478 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 20 03:06:27.095028 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 03:06:27.118534 sudo[1743]: pam_unix(sudo:session): session closed for user root
Jan 20 03:06:27.121213 sshd[1742]: Connection closed by 10.0.0.1 port 36574
Jan 20 03:06:27.122004 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Jan 20 03:06:27.141459 systemd[1]: sshd@6-10.0.0.5:22-10.0.0.1:36574.service: Deactivated successfully.
Jan 20 03:06:27.144273 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 03:06:27.145959 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit.
Jan 20 03:06:27.150192 systemd[1]: Started sshd@7-10.0.0.5:22-10.0.0.1:36588.service - OpenSSH per-connection server daemon (10.0.0.1:36588).
Jan 20 03:06:27.152021 systemd-logind[1551]: Removed session 7.
Jan 20 03:06:27.213038 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 36588 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:06:27.214882 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:06:27.221348 systemd-logind[1551]: New session 8 of user core.
Jan 20 03:06:27.230838 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 03:06:27.291271 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 20 03:06:27.291868 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 03:06:27.300424 sudo[1754]: pam_unix(sudo:session): session closed for user root
Jan 20 03:06:27.310952 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 20 03:06:27.311462 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 03:06:27.324863 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 03:06:27.385301 augenrules[1776]: No rules
Jan 20 03:06:27.387075 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 03:06:27.387423 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 03:06:27.388928 sudo[1753]: pam_unix(sudo:session): session closed for user root
Jan 20 03:06:27.390853 sshd[1752]: Connection closed by 10.0.0.1 port 36588
Jan 20 03:06:27.391164 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Jan 20 03:06:27.405412 systemd[1]: sshd@7-10.0.0.5:22-10.0.0.1:36588.service: Deactivated successfully.
Jan 20 03:06:27.408195 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 03:06:27.409836 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit.
Jan 20 03:06:27.413274 systemd[1]: Started sshd@8-10.0.0.5:22-10.0.0.1:36594.service - OpenSSH per-connection server daemon (10.0.0.1:36594).
Jan 20 03:06:27.415224 systemd-logind[1551]: Removed session 8.
Jan 20 03:06:27.484154 sshd[1785]: Accepted publickey for core from 10.0.0.1 port 36594 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:06:27.485513 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:06:27.489250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 20 03:06:27.491000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:06:27.493052 systemd-logind[1551]: New session 9 of user core.
Jan 20 03:06:27.507036 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 03:06:27.567892 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 03:06:27.568344 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 03:06:27.734413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:06:27.758279 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 03:06:27.833346 kubelet[1811]: E0120 03:06:27.833236 1811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 03:06:27.840479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 03:06:27.840796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 03:06:27.841356 systemd[1]: kubelet.service: Consumed 278ms CPU time, 110.9M memory peak.
Jan 20 03:06:27.915151 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 03:06:27.933113 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 03:06:28.218135 dockerd[1827]: time="2026-01-20T03:06:28.218006237Z" level=info msg="Starting up"
Jan 20 03:06:28.219241 dockerd[1827]: time="2026-01-20T03:06:28.219179616Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 20 03:06:28.236235 dockerd[1827]: time="2026-01-20T03:06:28.236143539Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 20 03:06:28.376381 dockerd[1827]: time="2026-01-20T03:06:28.376252418Z" level=info msg="Loading containers: start."
Jan 20 03:06:28.392673 kernel: Initializing XFRM netlink socket
Jan 20 03:06:28.818096 systemd-networkd[1465]: docker0: Link UP
Jan 20 03:06:28.823985 dockerd[1827]: time="2026-01-20T03:06:28.823917433Z" level=info msg="Loading containers: done."
Jan 20 03:06:28.841357 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2241874548-merged.mount: Deactivated successfully.
Jan 20 03:06:28.846055 dockerd[1827]: time="2026-01-20T03:06:28.845984477Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 20 03:06:28.846134 dockerd[1827]: time="2026-01-20T03:06:28.846074105Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 20 03:06:28.846162 dockerd[1827]: time="2026-01-20T03:06:28.846151329Z" level=info msg="Initializing buildkit"
Jan 20 03:06:28.888346 dockerd[1827]: time="2026-01-20T03:06:28.888309646Z" level=info msg="Completed buildkit initialization"
Jan 20 03:06:28.896717 dockerd[1827]: time="2026-01-20T03:06:28.896649773Z" level=info msg="Daemon has completed initialization"
Jan 20 03:06:28.896810 dockerd[1827]: time="2026-01-20T03:06:28.896761954Z" level=info msg="API listen on /run/docker.sock"
Jan 20 03:06:28.896966 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 20 03:06:29.655764 containerd[1574]: time="2026-01-20T03:06:29.655563700Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 20 03:06:30.191471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421119866.mount: Deactivated successfully.
Jan 20 03:06:31.589416 containerd[1574]: time="2026-01-20T03:06:31.589321597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:31.590050 containerd[1574]: time="2026-01-20T03:06:31.590022978Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712"
Jan 20 03:06:31.591493 containerd[1574]: time="2026-01-20T03:06:31.591381614Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:31.594573 containerd[1574]: time="2026-01-20T03:06:31.594460790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:31.596451 containerd[1574]: time="2026-01-20T03:06:31.596333317Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.940666484s"
Jan 20 03:06:31.596451 containerd[1574]: time="2026-01-20T03:06:31.596390643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 20 03:06:31.597182 containerd[1574]: time="2026-01-20T03:06:31.597096091Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 20 03:06:33.309643 containerd[1574]: time="2026-01-20T03:06:33.309451291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:33.310159 containerd[1574]: time="2026-01-20T03:06:33.310081355Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781"
Jan 20 03:06:33.311611 containerd[1574]: time="2026-01-20T03:06:33.311518580Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:33.314512 containerd[1574]: time="2026-01-20T03:06:33.314446923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:33.316093 containerd[1574]: time="2026-01-20T03:06:33.316053292Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.718885637s"
Jan 20 03:06:33.316159 containerd[1574]: time="2026-01-20T03:06:33.316094989Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 20 03:06:33.316759 containerd[1574]: time="2026-01-20T03:06:33.316681504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 20 03:06:34.725844 containerd[1574]: time="2026-01-20T03:06:34.725653724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:34.726955 containerd[1574]: time="2026-01-20T03:06:34.726922044Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102"
Jan 20 03:06:34.728363 containerd[1574]: time="2026-01-20T03:06:34.728321992Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:34.731010 containerd[1574]: time="2026-01-20T03:06:34.730926283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:34.732127 containerd[1574]: time="2026-01-20T03:06:34.732053469Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.415313486s"
Jan 20 03:06:34.732127 containerd[1574]: time="2026-01-20T03:06:34.732108081Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 20 03:06:34.733128 containerd[1574]: time="2026-01-20T03:06:34.733077941Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 20 03:06:35.698812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674198547.mount: Deactivated successfully.
Jan 20 03:06:36.168361 containerd[1574]: time="2026-01-20T03:06:36.168285718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:36.169546 containerd[1574]: time="2026-01-20T03:06:36.169429660Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096"
Jan 20 03:06:36.170729 containerd[1574]: time="2026-01-20T03:06:36.170651283Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:36.172776 containerd[1574]: time="2026-01-20T03:06:36.172670088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:36.173305 containerd[1574]: time="2026-01-20T03:06:36.173227086Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.440122635s"
Jan 20 03:06:36.173305 containerd[1574]: time="2026-01-20T03:06:36.173283943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 20 03:06:36.174128 containerd[1574]: time="2026-01-20T03:06:36.174061381Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 20 03:06:36.632199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644385280.mount: Deactivated successfully.
Jan 20 03:06:37.685446 containerd[1574]: time="2026-01-20T03:06:37.685311029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:37.686887 containerd[1574]: time="2026-01-20T03:06:37.686378993Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jan 20 03:06:37.688362 containerd[1574]: time="2026-01-20T03:06:37.688311629Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:37.691827 containerd[1574]: time="2026-01-20T03:06:37.691720122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:06:37.692973 containerd[1574]: time="2026-01-20T03:06:37.692857413Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.518742102s"
Jan 20 03:06:37.692973 containerd[1574]: time="2026-01-20T03:06:37.692936109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 20 03:06:37.693732 containerd[1574]: time="2026-01-20T03:06:37.693532354Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 20 03:06:37.891084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 03:06:37.893193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:06:38.113098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:06:38.121130 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 03:06:38.176531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567342500.mount: Deactivated successfully.
Jan 20 03:06:38.187504 containerd[1574]: time="2026-01-20T03:06:38.187407450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 03:06:38.188342 containerd[1574]: time="2026-01-20T03:06:38.188318215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jan 20 03:06:38.188832 kubelet[2180]: E0120 03:06:38.188732 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 03:06:38.190021 containerd[1574]: time="2026-01-20T03:06:38.189946527Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 03:06:38.193412 containerd[1574]: time="2026-01-20T03:06:38.193058800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 03:06:38.193288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 03:06:38.193534 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 03:06:38.194154 systemd[1]: kubelet.service: Consumed 238ms CPU time, 110.4M memory peak.
Jan 20 03:06:38.194556 containerd[1574]: time="2026-01-20T03:06:38.194205636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 500.53338ms"
Jan 20 03:06:38.194556 containerd[1574]: time="2026-01-20T03:06:38.194245821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 20 03:06:38.195092 containerd[1574]: time="2026-01-20T03:06:38.195040655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 20 03:06:38.659566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1636307297.mount: Deactivated successfully.
Jan 20 03:06:40.516796 containerd[1574]: time="2026-01-20T03:06:40.516670280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:40.517794 containerd[1574]: time="2026-01-20T03:06:40.517761624Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 20 03:06:40.519392 containerd[1574]: time="2026-01-20T03:06:40.519325276Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:40.522353 containerd[1574]: time="2026-01-20T03:06:40.522253982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:06:40.523287 containerd[1574]: time="2026-01-20T03:06:40.523184900Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.328088962s" Jan 20 03:06:40.523287 containerd[1574]: time="2026-01-20T03:06:40.523238290Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 20 03:06:43.644395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:43.644745 systemd[1]: kubelet.service: Consumed 238ms CPU time, 110.4M memory peak. Jan 20 03:06:43.647453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:43.677763 systemd[1]: Reload requested from client PID 2278 ('systemctl') (unit session-9.scope)... Jan 20 03:06:43.677808 systemd[1]: Reloading... Jan 20 03:06:43.760841 zram_generator::config[2317]: No configuration found. Jan 20 03:06:44.045833 systemd[1]: Reloading finished in 367 ms. Jan 20 03:06:44.133493 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 03:06:44.133692 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 03:06:44.134101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:44.134175 systemd[1]: kubelet.service: Consumed 172ms CPU time, 98.2M memory peak. Jan 20 03:06:44.136027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:44.334899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:44.364353 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 03:06:44.425822 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:06:44.425822 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 03:06:44.425822 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:06:44.425822 kubelet[2368]: I0120 03:06:44.425787 2368 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 03:06:45.178758 kubelet[2368]: I0120 03:06:45.178519 2368 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 03:06:45.178758 kubelet[2368]: I0120 03:06:45.178680 2368 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 03:06:45.179367 kubelet[2368]: I0120 03:06:45.179290 2368 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 03:06:45.219137 kubelet[2368]: E0120 03:06:45.219032 2368 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 03:06:45.219675 kubelet[2368]: I0120 03:06:45.219549 2368 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 03:06:45.230504 kubelet[2368]: I0120 03:06:45.228479 2368 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 03:06:45.238230 kubelet[2368]: I0120 03:06:45.238130 2368 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 03:06:45.238490 kubelet[2368]: I0120 03:06:45.238405 2368 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 03:06:45.238824 kubelet[2368]: I0120 03:06:45.238471 2368 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 03:06:45.238824 kubelet[2368]: I0120 03:06:45.238787 2368 topology_manager.go:138] "Creating topology 
manager with none policy" Jan 20 03:06:45.238824 kubelet[2368]: I0120 03:06:45.238801 2368 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 03:06:45.239045 kubelet[2368]: I0120 03:06:45.238932 2368 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:06:45.242348 kubelet[2368]: I0120 03:06:45.242237 2368 kubelet.go:480] "Attempting to sync node with API server" Jan 20 03:06:45.242348 kubelet[2368]: I0120 03:06:45.242319 2368 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 03:06:45.242348 kubelet[2368]: I0120 03:06:45.242352 2368 kubelet.go:386] "Adding apiserver pod source" Jan 20 03:06:45.242468 kubelet[2368]: I0120 03:06:45.242376 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 03:06:45.249030 kubelet[2368]: E0120 03:06:45.248919 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 03:06:45.249155 kubelet[2368]: E0120 03:06:45.249113 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 03:06:45.251491 kubelet[2368]: I0120 03:06:45.250940 2368 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 03:06:45.251931 kubelet[2368]: I0120 03:06:45.251796 2368 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 03:06:45.252682 kubelet[2368]: W0120 03:06:45.252560 2368 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
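[Annotation] The nodeConfig dump above spells out the kubelet's hard eviction thresholds: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%, every one with the LessThan operator. A minimal Python sketch of how such a threshold evaluates (annotator's illustration, not kubelet source; the example capacities are invented):

    from typing import Optional

    def quantity_to_bytes(q: str) -> int:
        # Tiny parser for the binary-suffix quantity seen in the dump ("100Mi").
        for suffix, mult in (("Ki", 1024), ("Mi", 1024**2), ("Gi", 1024**3)):
            if q.endswith(suffix):
                return int(q[:-2]) * mult
        return int(q)

    def threshold_met(available: float, capacity: float,
                      quantity: Optional[str], percentage: float) -> bool:
        # Every threshold in the dump uses Operator "LessThan"; memory.available
        # carries an absolute Quantity, the filesystem signals a Percentage.
        if quantity is not None:
            return available < quantity_to_bytes(quantity)
        return available < percentage * capacity

    # 80 MiB of reclaimable memory trips memory.available<100Mi ...
    print(threshold_met(80 * 1024**2, 4 * 1024**3, "100Mi", 0))  # True
    # ... while 12% free nodefs stays above the 10% floor.
    print(threshold_met(0.12 * 100e9, 100e9, None, 0.1))         # False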
Jan 20 03:06:45.259204 kubelet[2368]: I0120 03:06:45.259133 2368 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 03:06:45.259302 kubelet[2368]: I0120 03:06:45.259251 2368 server.go:1289] "Started kubelet" Jan 20 03:06:45.260260 kubelet[2368]: I0120 03:06:45.260020 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 03:06:45.261565 kubelet[2368]: I0120 03:06:45.261396 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 03:06:45.261565 kubelet[2368]: I0120 03:06:45.261373 2368 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 03:06:45.263680 kubelet[2368]: I0120 03:06:45.262840 2368 server.go:317] "Adding debug handlers to kubelet server" Jan 20 03:06:45.263680 kubelet[2368]: I0120 03:06:45.263647 2368 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 03:06:45.265501 kubelet[2368]: I0120 03:06:45.265449 2368 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 03:06:45.265900 kubelet[2368]: E0120 03:06:45.265859 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 03:06:45.265954 kubelet[2368]: I0120 03:06:45.265918 2368 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 03:06:45.266206 kubelet[2368]: I0120 03:06:45.266169 2368 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 03:06:45.266260 kubelet[2368]: I0120 03:06:45.266229 2368 reconciler.go:26] "Reconciler: start to sync state" Jan 20 03:06:45.266691 kubelet[2368]: E0120 03:06:45.266568 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 03:06:45.269247 kubelet[2368]: I0120 03:06:45.267524 2368 factory.go:223] Registration of the systemd container factory successfully Jan 20 03:06:45.269247 kubelet[2368]: I0120 03:06:45.269071 2368 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 03:06:45.269872 kubelet[2368]: E0120 03:06:45.269812 2368 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 03:06:45.270059 kubelet[2368]: E0120 03:06:45.265453 2368 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c5189a4555d9e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 03:06:45.259189662 +0000 UTC m=+0.888264569,LastTimestamp:2026-01-20 03:06:45.259189662 +0000 UTC m=+0.888264569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 03:06:45.270574 kubelet[2368]: E0120 03:06:45.268125 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="200ms" Jan 20 03:06:45.273066 kubelet[2368]: I0120 03:06:45.272941 2368 factory.go:223] Registration of the containerd container factory successfully Jan 20 03:06:45.295483 kubelet[2368]: I0120 03:06:45.295440 2368 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 03:06:45.295483 kubelet[2368]: I0120 03:06:45.295479 2368 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 03:06:45.295669 kubelet[2368]: I0120 03:06:45.295499 2368 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:06:45.302377 kubelet[2368]: I0120 03:06:45.302291 2368 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 03:06:45.304892 kubelet[2368]: I0120 03:06:45.304810 2368 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 03:06:45.304892 kubelet[2368]: I0120 03:06:45.304836 2368 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 03:06:45.304892 kubelet[2368]: I0120 03:06:45.304862 2368 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
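[Annotation] Every client error above (the CSR post, the Service and Node informers, the lease, the event write) fails with connect: connection refused against https://10.0.0.5:6443. That is the expected chicken-and-egg of a control-plane bootstrap: this kubelet is itself about to launch the static-pod kube-apiserver it is trying to reach. A throwaway probe like the following (annotator's sketch; host and port taken from the log) shows when the endpoint starts accepting TCP connections:

    import socket
    import time

    def wait_for_apiserver(host: str = "10.0.0.5", port: int = 6443,
                           timeout_s: float = 120.0) -> bool:
        # Plain TCP connect loop; TLS handshake and authn readiness are separate.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True
            except OSError:
                time.sleep(1)
        return False

    if __name__ == "__main__":
        print("apiserver reachable:", wait_for_apiserver())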
Jan 20 03:06:45.304892 kubelet[2368]: I0120 03:06:45.304871 2368 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 03:06:45.305056 kubelet[2368]: E0120 03:06:45.304924 2368 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 03:06:45.306655 kubelet[2368]: E0120 03:06:45.305870 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 03:06:45.366844 kubelet[2368]: E0120 03:06:45.366777 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 03:06:45.381519 kubelet[2368]: I0120 03:06:45.381371 2368 policy_none.go:49] "None policy: Start" Jan 20 03:06:45.381519 kubelet[2368]: I0120 03:06:45.381436 2368 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 03:06:45.381519 kubelet[2368]: I0120 03:06:45.381456 2368 state_mem.go:35] "Initializing new in-memory state store" Jan 20 03:06:45.392935 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 03:06:45.406123 kubelet[2368]: E0120 03:06:45.406060 2368 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 03:06:45.408050 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 03:06:45.413042 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 03:06:45.425915 kubelet[2368]: E0120 03:06:45.425884 2368 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 03:06:45.427215 kubelet[2368]: I0120 03:06:45.426211 2368 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 03:06:45.427215 kubelet[2368]: I0120 03:06:45.426228 2368 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 03:06:45.427215 kubelet[2368]: I0120 03:06:45.426948 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 03:06:45.428670 kubelet[2368]: E0120 03:06:45.428471 2368 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 03:06:45.428670 kubelet[2368]: E0120 03:06:45.428556 2368 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 03:06:45.472306 kubelet[2368]: E0120 03:06:45.472101 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="400ms" Jan 20 03:06:45.529086 kubelet[2368]: I0120 03:06:45.528935 2368 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:45.529413 kubelet[2368]: E0120 03:06:45.529343 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Jan 20 03:06:45.668383 kubelet[2368]: I0120 03:06:45.668189 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/caecb6ca432f78274ca405b945f4d92b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"caecb6ca432f78274ca405b945f4d92b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:45.668383 kubelet[2368]: I0120 03:06:45.668273 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/caecb6ca432f78274ca405b945f4d92b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"caecb6ca432f78274ca405b945f4d92b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:45.668383 kubelet[2368]: I0120 03:06:45.668304 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/caecb6ca432f78274ca405b945f4d92b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"caecb6ca432f78274ca405b945f4d92b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:45.724763 systemd[1]: Created slice kubepods-burstable-podcaecb6ca432f78274ca405b945f4d92b.slice - libcontainer container kubepods-burstable-podcaecb6ca432f78274ca405b945f4d92b.slice. Jan 20 03:06:45.731813 kubelet[2368]: I0120 03:06:45.731538 2368 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:45.732148 kubelet[2368]: E0120 03:06:45.732071 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Jan 20 03:06:45.738308 kubelet[2368]: E0120 03:06:45.738200 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:45.744202 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 20 03:06:45.746953 kubelet[2368]: E0120 03:06:45.746878 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:45.749368 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Jan 20 03:06:45.752303 kubelet[2368]: E0120 03:06:45.752240 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:45.769170 kubelet[2368]: I0120 03:06:45.768900 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:45.769170 kubelet[2368]: I0120 03:06:45.768942 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:45.769170 kubelet[2368]: I0120 03:06:45.768961 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:45.769170 kubelet[2368]: I0120 03:06:45.769035 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:45.769170 kubelet[2368]: I0120 03:06:45.769071 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:45.769403 kubelet[2368]: I0120 03:06:45.769098 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:45.873263 kubelet[2368]: E0120 03:06:45.873111 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="800ms" Jan 20 03:06:46.039566 kubelet[2368]: E0120 03:06:46.039397 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:46.040562 containerd[1574]: time="2026-01-20T03:06:46.040470367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:caecb6ca432f78274ca405b945f4d92b,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:46.048093 kubelet[2368]: E0120 03:06:46.047829 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:46.048477 containerd[1574]: time="2026-01-20T03:06:46.048417290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:46.053232 kubelet[2368]: E0120 03:06:46.053148 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:46.053688 containerd[1574]: time="2026-01-20T03:06:46.053574667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:46.085075 containerd[1574]: time="2026-01-20T03:06:46.084907479Z" level=info msg="connecting to shim 6c0659d92d3f957ac4d91419bae470876e81d6d3f8ba2c64e2243d7fa3359ccf" address="unix:///run/containerd/s/55363d1f26e0cff41748d4e17efb8ad07f3851f550158771069037bfe71edf6e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:46.094175 containerd[1574]: time="2026-01-20T03:06:46.093895676Z" level=info msg="connecting to shim 9ea18a7677ea885c5911884acd7a312b21f5199482343580790c4b2652e6641d" address="unix:///run/containerd/s/2add020e17bd349ad14e4ad23af71fd030cda249541b36ff13493e1fff12b20f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:46.102653 containerd[1574]: time="2026-01-20T03:06:46.102167896Z" level=info msg="connecting to shim cd02a459b3746bf3116ae8fd548c001b43ca4b77db105471c8998a86f8a63074" address="unix:///run/containerd/s/6d1c285f3ff6c99e5157ea23a4d7db90560616a54d8e5a9ba772bf60fe7e5b94" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:46.130816 systemd[1]: Started cri-containerd-9ea18a7677ea885c5911884acd7a312b21f5199482343580790c4b2652e6641d.scope - libcontainer container 9ea18a7677ea885c5911884acd7a312b21f5199482343580790c4b2652e6641d. Jan 20 03:06:46.136243 kubelet[2368]: I0120 03:06:46.136186 2368 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:46.138335 kubelet[2368]: E0120 03:06:46.138213 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Jan 20 03:06:46.148812 systemd[1]: Started cri-containerd-6c0659d92d3f957ac4d91419bae470876e81d6d3f8ba2c64e2243d7fa3359ccf.scope - libcontainer container 6c0659d92d3f957ac4d91419bae470876e81d6d3f8ba2c64e2243d7fa3359ccf. Jan 20 03:06:46.154569 systemd[1]: Started cri-containerd-cd02a459b3746bf3116ae8fd548c001b43ca4b77db105471c8998a86f8a63074.scope - libcontainer container cd02a459b3746bf3116ae8fd548c001b43ca4b77db105471c8998a86f8a63074. 
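[Annotation] The recurring dns.go:153 warning reflects the glibc resolver's three-nameserver limit (MAXNS): the node's resolv.conf lists more than three servers, so the kubelet keeps the first three, 1.1.1.1 1.0.0.1 8.8.8.8, and reports the rest as omitted. A sketch of that truncation (annotator's illustration; the fourth entry, 8.8.4.4, is an invented stand-in, the log does not show which server was dropped):

    MAX_NAMESERVERS = 3  # glibc MAXNS: entries beyond the third are ignored

    def applied_nameservers(resolv_conf_text: str) -> list:
        servers = []
        for line in resolv_conf_text.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS]

    conf = """nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4
    """
    print(" ".join(applied_nameservers(conf)))  # 1.1.1.1 1.0.0.1 8.8.8.8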
Jan 20 03:06:46.214906 containerd[1574]: time="2026-01-20T03:06:46.214819521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ea18a7677ea885c5911884acd7a312b21f5199482343580790c4b2652e6641d\"" Jan 20 03:06:46.216125 kubelet[2368]: E0120 03:06:46.216056 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:46.225134 containerd[1574]: time="2026-01-20T03:06:46.224885269Z" level=info msg="CreateContainer within sandbox \"9ea18a7677ea885c5911884acd7a312b21f5199482343580790c4b2652e6641d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 03:06:46.240677 containerd[1574]: time="2026-01-20T03:06:46.240452185Z" level=info msg="Container 918b9d5a41b5f0f6abf71833c764c1d2d66f1232e160fbb9ccae644ce29ab943: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:46.245647 containerd[1574]: time="2026-01-20T03:06:46.244867820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd02a459b3746bf3116ae8fd548c001b43ca4b77db105471c8998a86f8a63074\"" Jan 20 03:06:46.247294 kubelet[2368]: E0120 03:06:46.247265 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:46.255546 containerd[1574]: time="2026-01-20T03:06:46.255365875Z" level=info msg="CreateContainer within sandbox \"9ea18a7677ea885c5911884acd7a312b21f5199482343580790c4b2652e6641d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"918b9d5a41b5f0f6abf71833c764c1d2d66f1232e160fbb9ccae644ce29ab943\"" Jan 20 03:06:46.257883 containerd[1574]: time="2026-01-20T03:06:46.257362933Z" level=info msg="CreateContainer within sandbox \"cd02a459b3746bf3116ae8fd548c001b43ca4b77db105471c8998a86f8a63074\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 03:06:46.258812 containerd[1574]: time="2026-01-20T03:06:46.258722615Z" level=info msg="StartContainer for \"918b9d5a41b5f0f6abf71833c764c1d2d66f1232e160fbb9ccae644ce29ab943\"" Jan 20 03:06:46.260484 containerd[1574]: time="2026-01-20T03:06:46.260438198Z" level=info msg="connecting to shim 918b9d5a41b5f0f6abf71833c764c1d2d66f1232e160fbb9ccae644ce29ab943" address="unix:///run/containerd/s/2add020e17bd349ad14e4ad23af71fd030cda249541b36ff13493e1fff12b20f" protocol=ttrpc version=3 Jan 20 03:06:46.263393 containerd[1574]: time="2026-01-20T03:06:46.262901437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:caecb6ca432f78274ca405b945f4d92b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c0659d92d3f957ac4d91419bae470876e81d6d3f8ba2c64e2243d7fa3359ccf\"" Jan 20 03:06:46.263764 kubelet[2368]: E0120 03:06:46.263714 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:46.268778 containerd[1574]: time="2026-01-20T03:06:46.268709588Z" level=info msg="CreateContainer within sandbox \"6c0659d92d3f957ac4d91419bae470876e81d6d3f8ba2c64e2243d7fa3359ccf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 
03:06:46.278228 containerd[1574]: time="2026-01-20T03:06:46.278151833Z" level=info msg="Container 004b48c66f57f83c70be648e4dd0c6e4bb74587fe2c9dbf744547741ca1de6ae: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:46.282835 containerd[1574]: time="2026-01-20T03:06:46.282796983Z" level=info msg="Container 009b59a389f0f57b49852ad399c7d04b5767f12b3c2a9e386447a58dd5358114: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:46.291337 containerd[1574]: time="2026-01-20T03:06:46.290565067Z" level=info msg="CreateContainer within sandbox \"cd02a459b3746bf3116ae8fd548c001b43ca4b77db105471c8998a86f8a63074\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"004b48c66f57f83c70be648e4dd0c6e4bb74587fe2c9dbf744547741ca1de6ae\"" Jan 20 03:06:46.291836 containerd[1574]: time="2026-01-20T03:06:46.291573117Z" level=info msg="StartContainer for \"004b48c66f57f83c70be648e4dd0c6e4bb74587fe2c9dbf744547741ca1de6ae\"" Jan 20 03:06:46.294157 containerd[1574]: time="2026-01-20T03:06:46.294130360Z" level=info msg="connecting to shim 004b48c66f57f83c70be648e4dd0c6e4bb74587fe2c9dbf744547741ca1de6ae" address="unix:///run/containerd/s/6d1c285f3ff6c99e5157ea23a4d7db90560616a54d8e5a9ba772bf60fe7e5b94" protocol=ttrpc version=3 Jan 20 03:06:46.294967 systemd[1]: Started cri-containerd-918b9d5a41b5f0f6abf71833c764c1d2d66f1232e160fbb9ccae644ce29ab943.scope - libcontainer container 918b9d5a41b5f0f6abf71833c764c1d2d66f1232e160fbb9ccae644ce29ab943. Jan 20 03:06:46.300471 containerd[1574]: time="2026-01-20T03:06:46.300394880Z" level=info msg="CreateContainer within sandbox \"6c0659d92d3f957ac4d91419bae470876e81d6d3f8ba2c64e2243d7fa3359ccf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"009b59a389f0f57b49852ad399c7d04b5767f12b3c2a9e386447a58dd5358114\"" Jan 20 03:06:46.303639 containerd[1574]: time="2026-01-20T03:06:46.301777180Z" level=info msg="StartContainer for \"009b59a389f0f57b49852ad399c7d04b5767f12b3c2a9e386447a58dd5358114\"" Jan 20 03:06:46.304190 containerd[1574]: time="2026-01-20T03:06:46.304166059Z" level=info msg="connecting to shim 009b59a389f0f57b49852ad399c7d04b5767f12b3c2a9e386447a58dd5358114" address="unix:///run/containerd/s/55363d1f26e0cff41748d4e17efb8ad07f3851f550158771069037bfe71edf6e" protocol=ttrpc version=3 Jan 20 03:06:46.327834 systemd[1]: Started cri-containerd-004b48c66f57f83c70be648e4dd0c6e4bb74587fe2c9dbf744547741ca1de6ae.scope - libcontainer container 004b48c66f57f83c70be648e4dd0c6e4bb74587fe2c9dbf744547741ca1de6ae. Jan 20 03:06:46.350844 systemd[1]: Started cri-containerd-009b59a389f0f57b49852ad399c7d04b5767f12b3c2a9e386447a58dd5358114.scope - libcontainer container 009b59a389f0f57b49852ad399c7d04b5767f12b3c2a9e386447a58dd5358114. 
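[Annotation] Each "connecting to shim" record above pairs a 64-hex sandbox or container id with the ttrpc unix socket of the containerd shim serving it; the same ids reappear in the cri-containerd-<id>.scope units systemd starts. A throwaway parse of those records (annotator's sketch; the regex assumes exactly the message format visible in this log):

    import re

    SHIM_RE = re.compile(
        r'msg="connecting to shim (?P<id>[0-9a-f]{64})" '
        r'address="(?P<addr>unix://[^"]+)"')

    def shim_sockets(log_text: str) -> dict:
        # Maps sandbox/container id -> shim ttrpc socket address.
        return {m["id"]: m["addr"] for m in SHIM_RE.finditer(log_text)}

    record = ('level=info msg="connecting to shim '
              '918b9d5a41b5f0f6abf71833c764c1d2d66f1232e160fbb9ccae644ce29ab943" '
              'address="unix:///run/containerd/s/2add020e17bd349ad14e4ad23af71fd030cda249541b36ff13493e1fff12b20f"')
    print(shim_sockets(record))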
Jan 20 03:06:46.423535 containerd[1574]: time="2026-01-20T03:06:46.423450271Z" level=info msg="StartContainer for \"918b9d5a41b5f0f6abf71833c764c1d2d66f1232e160fbb9ccae644ce29ab943\" returns successfully" Jan 20 03:06:46.439966 containerd[1574]: time="2026-01-20T03:06:46.439904442Z" level=info msg="StartContainer for \"004b48c66f57f83c70be648e4dd0c6e4bb74587fe2c9dbf744547741ca1de6ae\" returns successfully" Jan 20 03:06:46.458449 containerd[1574]: time="2026-01-20T03:06:46.458362109Z" level=info msg="StartContainer for \"009b59a389f0f57b49852ad399c7d04b5767f12b3c2a9e386447a58dd5358114\" returns successfully" Jan 20 03:06:46.514646 kubelet[2368]: E0120 03:06:46.513741 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 03:06:46.947814 kubelet[2368]: I0120 03:06:46.947420 2368 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:47.331423 kubelet[2368]: E0120 03:06:47.331210 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:47.332332 kubelet[2368]: E0120 03:06:47.331982 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:47.341426 kubelet[2368]: E0120 03:06:47.340984 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:47.341426 kubelet[2368]: E0120 03:06:47.341142 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:47.341819 kubelet[2368]: E0120 03:06:47.341804 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 03:06:47.341948 kubelet[2368]: E0120 03:06:47.341936 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:47.619962 kubelet[2368]: E0120 03:06:47.619832 2368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 03:06:47.752767 kubelet[2368]: E0120 03:06:47.752433 2368 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c5189a4555d9e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 03:06:45.259189662 +0000 UTC m=+0.888264569,LastTimestamp:2026-01-20 03:06:45.259189662 +0000 UTC m=+0.888264569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 03:06:47.811946 kubelet[2368]: I0120 03:06:47.811892 
2368 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 03:06:47.811946 kubelet[2368]: E0120 03:06:47.811943 2368 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 03:06:47.869215 kubelet[2368]: I0120 03:06:47.868877 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:47.877792 kubelet[2368]: E0120 03:06:47.876770 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:47.877792 kubelet[2368]: I0120 03:06:47.876842 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:47.880439 kubelet[2368]: E0120 03:06:47.880414 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:47.880439 kubelet[2368]: I0120 03:06:47.880441 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:47.883224 kubelet[2368]: E0120 03:06:47.883153 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:48.244783 kubelet[2368]: I0120 03:06:48.244682 2368 apiserver.go:52] "Watching apiserver" Jan 20 03:06:48.266800 kubelet[2368]: I0120 03:06:48.266731 2368 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 03:06:48.343478 kubelet[2368]: I0120 03:06:48.343371 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:48.343661 kubelet[2368]: I0120 03:06:48.343545 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:48.343960 kubelet[2368]: I0120 03:06:48.343885 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:48.348188 kubelet[2368]: E0120 03:06:48.348133 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:48.348419 kubelet[2368]: E0120 03:06:48.348316 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:48.349404 kubelet[2368]: E0120 03:06:48.349345 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:48.349945 kubelet[2368]: E0120 03:06:48.349748 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:48.350061 kubelet[2368]: E0120 03:06:48.349748 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:48.350115 kubelet[2368]: E0120 03:06:48.350100 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:49.345812 kubelet[2368]: I0120 03:06:49.345683 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:49.346381 kubelet[2368]: I0120 03:06:49.345846 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:49.352282 kubelet[2368]: E0120 03:06:49.352222 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:49.352435 kubelet[2368]: E0120 03:06:49.352336 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:49.800005 systemd[1]: Reload requested from client PID 2659 ('systemctl') (unit session-9.scope)... Jan 20 03:06:49.800024 systemd[1]: Reloading... Jan 20 03:06:49.876252 zram_generator::config[2698]: No configuration found. Jan 20 03:06:50.206188 systemd[1]: Reloading finished in 405 ms. Jan 20 03:06:50.240923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:50.253321 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 03:06:50.253719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:50.253783 systemd[1]: kubelet.service: Consumed 1.561s CPU time, 131.8M memory peak. Jan 20 03:06:50.256840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:06:50.479059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:06:50.495261 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 03:06:50.562448 kubelet[2747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:06:50.562448 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 03:06:50.562448 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
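[Annotation] The mirror-pod failures at 03:06:47-48 ("no PriorityClass with name system-node-critical was found") are transient: system-node-critical and system-cluster-critical are built-in classes the apiserver seeds shortly after it starts, which is why the retries after the kubelet restart below fail with "already exists" instead. A quick check with the official kubernetes Python client (annotator's sketch; assumes the client is installed and a kubeconfig is reachable):

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    def node_critical_seeded() -> bool:
        config.load_kube_config()  # or config.load_incluster_config() on the node
        sched = client.SchedulingV1Api()
        try:
            pc = sched.read_priority_class("system-node-critical")
            return pc.value == 2000001000  # the built-in class's well-known value
        except ApiException as e:
            if e.status == 404:
                return False  # apiserver up, bootstrap class not seeded yet
            raise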
Jan 20 03:06:50.563097 kubelet[2747]: I0120 03:06:50.562555 2747 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 03:06:50.573125 kubelet[2747]: I0120 03:06:50.573026 2747 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 03:06:50.573125 kubelet[2747]: I0120 03:06:50.573079 2747 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 03:06:50.573480 kubelet[2747]: I0120 03:06:50.573392 2747 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 03:06:50.574892 kubelet[2747]: I0120 03:06:50.574786 2747 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 03:06:50.578890 kubelet[2747]: I0120 03:06:50.578736 2747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 03:06:50.588440 kubelet[2747]: I0120 03:06:50.588378 2747 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 03:06:50.596298 kubelet[2747]: I0120 03:06:50.596193 2747 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 03:06:50.596784 kubelet[2747]: I0120 03:06:50.596486 2747 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 03:06:50.596784 kubelet[2747]: I0120 03:06:50.596536 2747 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 03:06:50.596784 kubelet[2747]: I0120 03:06:50.596718 2747 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 03:06:50.596784 kubelet[2747]: I0120 03:06:50.596727 2747 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 03:06:50.596784 kubelet[2747]: I0120 03:06:50.596767 2747 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:06:50.597146 kubelet[2747]: I0120 
03:06:50.596930 2747 kubelet.go:480] "Attempting to sync node with API server" Jan 20 03:06:50.597146 kubelet[2747]: I0120 03:06:50.597020 2747 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 03:06:50.597146 kubelet[2747]: I0120 03:06:50.597041 2747 kubelet.go:386] "Adding apiserver pod source" Jan 20 03:06:50.597246 kubelet[2747]: I0120 03:06:50.597208 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 03:06:50.598533 kubelet[2747]: I0120 03:06:50.598352 2747 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 03:06:50.599121 kubelet[2747]: I0120 03:06:50.599016 2747 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 03:06:50.604501 kubelet[2747]: I0120 03:06:50.604445 2747 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 03:06:50.604659 kubelet[2747]: I0120 03:06:50.604535 2747 server.go:1289] "Started kubelet" Jan 20 03:06:50.608436 kubelet[2747]: I0120 03:06:50.608363 2747 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 03:06:50.609566 kubelet[2747]: I0120 03:06:50.609038 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 03:06:50.614993 kubelet[2747]: I0120 03:06:50.614923 2747 server.go:317] "Adding debug handlers to kubelet server" Jan 20 03:06:50.623788 kubelet[2747]: I0120 03:06:50.623454 2747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 03:06:50.625878 kubelet[2747]: I0120 03:06:50.625819 2747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 03:06:50.626886 kubelet[2747]: I0120 03:06:50.626817 2747 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 03:06:50.627667 kubelet[2747]: I0120 03:06:50.627642 2747 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 03:06:50.628680 kubelet[2747]: I0120 03:06:50.628549 2747 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 03:06:50.630135 kubelet[2747]: I0120 03:06:50.630062 2747 reconciler.go:26] "Reconciler: start to sync state" Jan 20 03:06:50.644066 kubelet[2747]: E0120 03:06:50.644036 2747 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 03:06:50.644920 kubelet[2747]: I0120 03:06:50.644389 2747 factory.go:223] Registration of the containerd container factory successfully Jan 20 03:06:50.644920 kubelet[2747]: I0120 03:06:50.644848 2747 factory.go:223] Registration of the systemd container factory successfully Jan 20 03:06:50.645168 kubelet[2747]: I0120 03:06:50.645068 2747 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 03:06:50.663688 kubelet[2747]: I0120 03:06:50.663537 2747 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 03:06:50.666781 kubelet[2747]: I0120 03:06:50.666756 2747 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 20 03:06:50.667328 kubelet[2747]: I0120 03:06:50.667311 2747 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 03:06:50.667402 kubelet[2747]: I0120 03:06:50.667392 2747 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 03:06:50.667448 kubelet[2747]: I0120 03:06:50.667440 2747 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 03:06:50.667542 kubelet[2747]: E0120 03:06:50.667519 2747 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 03:06:50.713001 kubelet[2747]: I0120 03:06:50.712902 2747 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 03:06:50.713001 kubelet[2747]: I0120 03:06:50.712949 2747 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 03:06:50.713171 kubelet[2747]: I0120 03:06:50.713019 2747 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:06:50.713569 kubelet[2747]: I0120 03:06:50.713205 2747 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 03:06:50.713569 kubelet[2747]: I0120 03:06:50.713226 2747 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 03:06:50.713569 kubelet[2747]: I0120 03:06:50.713246 2747 policy_none.go:49] "None policy: Start" Jan 20 03:06:50.713569 kubelet[2747]: I0120 03:06:50.713257 2747 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 03:06:50.713569 kubelet[2747]: I0120 03:06:50.713273 2747 state_mem.go:35] "Initializing new in-memory state store" Jan 20 03:06:50.713569 kubelet[2747]: I0120 03:06:50.713392 2747 state_mem.go:75] "Updated machine memory state" Jan 20 03:06:50.722239 kubelet[2747]: E0120 03:06:50.722159 2747 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 03:06:50.722418 kubelet[2747]: I0120 03:06:50.722348 2747 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 03:06:50.722418 kubelet[2747]: I0120 03:06:50.722395 2747 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 03:06:50.722690 kubelet[2747]: I0120 03:06:50.722662 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 03:06:50.724379 kubelet[2747]: E0120 03:06:50.724126 2747 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 03:06:50.770259 kubelet[2747]: I0120 03:06:50.770024 2747 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:50.770404 kubelet[2747]: I0120 03:06:50.770276 2747 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:50.771470 kubelet[2747]: I0120 03:06:50.770766 2747 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:50.782903 kubelet[2747]: E0120 03:06:50.782212 2747 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:50.784094 kubelet[2747]: E0120 03:06:50.784028 2747 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:50.818412 sudo[2788]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 20 03:06:50.819120 sudo[2788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 20 03:06:50.830837 kubelet[2747]: I0120 03:06:50.830798 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:50.830916 kubelet[2747]: I0120 03:06:50.830839 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/caecb6ca432f78274ca405b945f4d92b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"caecb6ca432f78274ca405b945f4d92b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:50.830916 kubelet[2747]: I0120 03:06:50.830871 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:50.830916 kubelet[2747]: I0120 03:06:50.830894 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:50.831126 kubelet[2747]: I0120 03:06:50.830914 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:50.831126 kubelet[2747]: I0120 03:06:50.830934 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/caecb6ca432f78274ca405b945f4d92b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"caecb6ca432f78274ca405b945f4d92b\") " 
pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:50.831126 kubelet[2747]: I0120 03:06:50.831007 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/caecb6ca432f78274ca405b945f4d92b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"caecb6ca432f78274ca405b945f4d92b\") " pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:50.831126 kubelet[2747]: I0120 03:06:50.831033 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:50.831126 kubelet[2747]: I0120 03:06:50.831053 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 03:06:50.838402 kubelet[2747]: I0120 03:06:50.838348 2747 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 03:06:50.848543 kubelet[2747]: I0120 03:06:50.848460 2747 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 03:06:50.848543 kubelet[2747]: I0120 03:06:50.848540 2747 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 03:06:51.083267 kubelet[2747]: E0120 03:06:51.083110 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:51.083267 kubelet[2747]: E0120 03:06:51.083265 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:51.085267 kubelet[2747]: E0120 03:06:51.085138 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:51.201483 sudo[2788]: pam_unix(sudo:session): session closed for user root Jan 20 03:06:51.598274 kubelet[2747]: I0120 03:06:51.598181 2747 apiserver.go:52] "Watching apiserver" Jan 20 03:06:51.629442 kubelet[2747]: I0120 03:06:51.629308 2747 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 03:06:51.694467 kubelet[2747]: I0120 03:06:51.694437 2747 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:51.695864 kubelet[2747]: E0120 03:06:51.694926 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:51.695990 kubelet[2747]: I0120 03:06:51.695506 2747 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:51.704557 kubelet[2747]: E0120 03:06:51.704379 2747 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 03:06:51.704718 kubelet[2747]: 
E0120 03:06:51.704662 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:51.706340 kubelet[2747]: E0120 03:06:51.706271 2747 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 03:06:51.707469 kubelet[2747]: E0120 03:06:51.707452 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:51.727787 kubelet[2747]: I0120 03:06:51.727721 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.7277043499999998 podStartE2EDuration="2.72770435s" podCreationTimestamp="2026-01-20 03:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:06:51.726172521 +0000 UTC m=+1.223516776" watchObservedRunningTime="2026-01-20 03:06:51.72770435 +0000 UTC m=+1.225048595" Jan 20 03:06:51.748916 kubelet[2747]: I0120 03:06:51.748786 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.7487689140000002 podStartE2EDuration="2.748768914s" podCreationTimestamp="2026-01-20 03:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:06:51.735888721 +0000 UTC m=+1.233232986" watchObservedRunningTime="2026-01-20 03:06:51.748768914 +0000 UTC m=+1.246113160" Jan 20 03:06:51.748916 kubelet[2747]: I0120 03:06:51.748910 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.748903543 podStartE2EDuration="1.748903543s" podCreationTimestamp="2026-01-20 03:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:06:51.748549328 +0000 UTC m=+1.245893583" watchObservedRunningTime="2026-01-20 03:06:51.748903543 +0000 UTC m=+1.246247788" Jan 20 03:06:52.696364 kubelet[2747]: E0120 03:06:52.696318 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:52.697061 kubelet[2747]: E0120 03:06:52.697013 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:52.730828 sudo[1792]: pam_unix(sudo:session): session closed for user root Jan 20 03:06:52.732982 sshd[1791]: Connection closed by 10.0.0.1 port 36594 Jan 20 03:06:52.733701 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 20 03:06:52.738908 systemd[1]: sshd@8-10.0.0.5:22-10.0.0.1:36594.service: Deactivated successfully. Jan 20 03:06:52.741329 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 03:06:52.741679 systemd[1]: session-9.scope: Consumed 5.281s CPU time, 262.6M memory peak. Jan 20 03:06:52.743440 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. Jan 20 03:06:52.745148 systemd-logind[1551]: Removed session 9. 
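[Annotation] For the pod_startup_latency_tracker entries above, where both pull timestamps are the zero time (static pods, images already present), podStartSLOduration is exactly watchObservedRunningTime minus podCreationTimestamp: 03:06:51.72770435 minus 03:06:49 gives the reported 2.72770435s for kube-scheduler-localhost, and the other two entries check out the same way. Verifying with datetime (annotator's sketch; microseconds truncate the log's finer fraction):

    from datetime import datetime, timezone

    created = datetime(2026, 1, 20, 3, 6, 49, tzinfo=timezone.utc)
    observed = datetime(2026, 1, 20, 3, 6, 51, 727704, tzinfo=timezone.utc)
    print((observed - created).total_seconds())  # 2.727704 ~= podStartSLOduration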
Jan 20 03:06:56.305650 kubelet[2747]: E0120 03:06:56.303570 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:56.445953 kubelet[2747]: I0120 03:06:56.445859 2747 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 03:06:56.446296 containerd[1574]: time="2026-01-20T03:06:56.446253703Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 03:06:56.446847 kubelet[2747]: I0120 03:06:56.446532 2747 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 03:06:56.706183 kubelet[2747]: E0120 03:06:56.705747 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:57.514644 systemd[1]: Created slice kubepods-besteffort-pod243e966a_f149_4685_ab27_9351d8b8db7f.slice - libcontainer container kubepods-besteffort-pod243e966a_f149_4685_ab27_9351d8b8db7f.slice. Jan 20 03:06:57.536680 systemd[1]: Created slice kubepods-burstable-pod4d8b902c_7287_4a58_86c3_239fdf52d565.slice - libcontainer container kubepods-burstable-pod4d8b902c_7287_4a58_86c3_239fdf52d565.slice. Jan 20 03:06:57.580973 kubelet[2747]: I0120 03:06:57.580850 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d8b902c-7287-4a58-86c3-239fdf52d565-hubble-tls\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.581809 kubelet[2747]: I0120 03:06:57.581000 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpnbj\" (UniqueName: \"kubernetes.io/projected/243e966a-f149-4685-ab27-9351d8b8db7f-kube-api-access-lpnbj\") pod \"kube-proxy-g75wz\" (UID: \"243e966a-f149-4685-ab27-9351d8b8db7f\") " pod="kube-system/kube-proxy-g75wz" Jan 20 03:06:57.581809 kubelet[2747]: I0120 03:06:57.581024 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-cgroup\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.581809 kubelet[2747]: I0120 03:06:57.581039 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-etc-cni-netd\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.581809 kubelet[2747]: I0120 03:06:57.581052 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-config-path\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.581809 kubelet[2747]: I0120 03:06:57.581065 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8vg7\" (UniqueName: 
\"kubernetes.io/projected/4d8b902c-7287-4a58-86c3-239fdf52d565-kube-api-access-w8vg7\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.582124 kubelet[2747]: I0120 03:06:57.581079 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-run\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.582124 kubelet[2747]: I0120 03:06:57.581092 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-host-proc-sys-net\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.582124 kubelet[2747]: I0120 03:06:57.581105 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-host-proc-sys-kernel\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.582124 kubelet[2747]: I0120 03:06:57.581118 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/243e966a-f149-4685-ab27-9351d8b8db7f-xtables-lock\") pod \"kube-proxy-g75wz\" (UID: \"243e966a-f149-4685-ab27-9351d8b8db7f\") " pod="kube-system/kube-proxy-g75wz" Jan 20 03:06:57.582124 kubelet[2747]: I0120 03:06:57.581149 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/243e966a-f149-4685-ab27-9351d8b8db7f-lib-modules\") pod \"kube-proxy-g75wz\" (UID: \"243e966a-f149-4685-ab27-9351d8b8db7f\") " pod="kube-system/kube-proxy-g75wz" Jan 20 03:06:57.582124 kubelet[2747]: I0120 03:06:57.581162 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-lib-modules\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.583470 kubelet[2747]: I0120 03:06:57.581223 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-xtables-lock\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.583470 kubelet[2747]: I0120 03:06:57.581253 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/243e966a-f149-4685-ab27-9351d8b8db7f-kube-proxy\") pod \"kube-proxy-g75wz\" (UID: \"243e966a-f149-4685-ab27-9351d8b8db7f\") " pod="kube-system/kube-proxy-g75wz" Jan 20 03:06:57.583470 kubelet[2747]: I0120 03:06:57.581269 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-bpf-maps\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " 
pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.583470 kubelet[2747]: I0120 03:06:57.581283 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-hostproc\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.583470 kubelet[2747]: I0120 03:06:57.581297 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cni-path\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.583470 kubelet[2747]: I0120 03:06:57.581313 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d8b902c-7287-4a58-86c3-239fdf52d565-clustermesh-secrets\") pod \"cilium-vfhrl\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " pod="kube-system/cilium-vfhrl" Jan 20 03:06:57.617805 systemd[1]: Created slice kubepods-besteffort-podfaa7e1e9_50d2_4e3f_9cf5_8a1a7da82c2d.slice - libcontainer container kubepods-besteffort-podfaa7e1e9_50d2_4e3f_9cf5_8a1a7da82c2d.slice. Jan 20 03:06:57.683098 kubelet[2747]: I0120 03:06:57.682252 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qfq9n\" (UID: \"faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d\") " pod="kube-system/cilium-operator-6c4d7847fc-qfq9n" Jan 20 03:06:57.683098 kubelet[2747]: I0120 03:06:57.682947 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrjhw\" (UniqueName: \"kubernetes.io/projected/faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d-kube-api-access-nrjhw\") pod \"cilium-operator-6c4d7847fc-qfq9n\" (UID: \"faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d\") " pod="kube-system/cilium-operator-6c4d7847fc-qfq9n" Jan 20 03:06:57.834571 kubelet[2747]: E0120 03:06:57.834411 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:57.836060 containerd[1574]: time="2026-01-20T03:06:57.835774836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g75wz,Uid:243e966a-f149-4685-ab27-9351d8b8db7f,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:57.840421 kubelet[2747]: E0120 03:06:57.840258 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:57.841392 containerd[1574]: time="2026-01-20T03:06:57.841122266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfhrl,Uid:4d8b902c-7287-4a58-86c3-239fdf52d565,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:57.886366 containerd[1574]: time="2026-01-20T03:06:57.886205429Z" level=info msg="connecting to shim 983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5" address="unix:///run/containerd/s/14eedf6d056043a2c25d5d1ae30a6398f8f3533b3117906e8295789a85afe04c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:57.887466 containerd[1574]: time="2026-01-20T03:06:57.887430154Z" level=info 
msg="connecting to shim 53d030487c1dfadfea38eec729c8786a452e4a077b7f069bdc8c02d80fc5b865" address="unix:///run/containerd/s/b99dfcaf7dbea00dafee3edeb0f2b9041d0c04c3627735738493cc819b002d08" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:57.923993 kubelet[2747]: E0120 03:06:57.923849 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:57.927475 containerd[1574]: time="2026-01-20T03:06:57.927359931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qfq9n,Uid:faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d,Namespace:kube-system,Attempt:0,}" Jan 20 03:06:57.952037 systemd[1]: Started cri-containerd-53d030487c1dfadfea38eec729c8786a452e4a077b7f069bdc8c02d80fc5b865.scope - libcontainer container 53d030487c1dfadfea38eec729c8786a452e4a077b7f069bdc8c02d80fc5b865. Jan 20 03:06:57.957342 systemd[1]: Started cri-containerd-983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5.scope - libcontainer container 983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5. Jan 20 03:06:57.964391 containerd[1574]: time="2026-01-20T03:06:57.964276519Z" level=info msg="connecting to shim aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880" address="unix:///run/containerd/s/fdbb4123ef13c2b7800a09378a4797ab068c58431dd37e1474e85c9913a025a1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:06:58.009803 systemd[1]: Started cri-containerd-aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880.scope - libcontainer container aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880. Jan 20 03:06:58.019563 containerd[1574]: time="2026-01-20T03:06:58.019468720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g75wz,Uid:243e966a-f149-4685-ab27-9351d8b8db7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"53d030487c1dfadfea38eec729c8786a452e4a077b7f069bdc8c02d80fc5b865\"" Jan 20 03:06:58.024802 kubelet[2747]: E0120 03:06:58.024736 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:58.025237 containerd[1574]: time="2026-01-20T03:06:58.025173840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vfhrl,Uid:4d8b902c-7287-4a58-86c3-239fdf52d565,Namespace:kube-system,Attempt:0,} returns sandbox id \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\"" Jan 20 03:06:58.026244 kubelet[2747]: E0120 03:06:58.026178 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:58.029023 containerd[1574]: time="2026-01-20T03:06:58.028996756Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 20 03:06:58.036479 containerd[1574]: time="2026-01-20T03:06:58.036292824Z" level=info msg="CreateContainer within sandbox \"53d030487c1dfadfea38eec729c8786a452e4a077b7f069bdc8c02d80fc5b865\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 03:06:58.053413 containerd[1574]: time="2026-01-20T03:06:58.053322545Z" level=info msg="Container ece9fec97327ca4b5f0e17246b3d9b14a758561ca2acff629fab73d7267223d7: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:06:58.069557 containerd[1574]: 
time="2026-01-20T03:06:58.069482197Z" level=info msg="CreateContainer within sandbox \"53d030487c1dfadfea38eec729c8786a452e4a077b7f069bdc8c02d80fc5b865\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ece9fec97327ca4b5f0e17246b3d9b14a758561ca2acff629fab73d7267223d7\"" Jan 20 03:06:58.070779 containerd[1574]: time="2026-01-20T03:06:58.070734721Z" level=info msg="StartContainer for \"ece9fec97327ca4b5f0e17246b3d9b14a758561ca2acff629fab73d7267223d7\"" Jan 20 03:06:58.073819 containerd[1574]: time="2026-01-20T03:06:58.073699489Z" level=info msg="connecting to shim ece9fec97327ca4b5f0e17246b3d9b14a758561ca2acff629fab73d7267223d7" address="unix:///run/containerd/s/b99dfcaf7dbea00dafee3edeb0f2b9041d0c04c3627735738493cc819b002d08" protocol=ttrpc version=3 Jan 20 03:06:58.107331 containerd[1574]: time="2026-01-20T03:06:58.107187436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qfq9n,Uid:faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880\"" Jan 20 03:06:58.110871 systemd[1]: Started cri-containerd-ece9fec97327ca4b5f0e17246b3d9b14a758561ca2acff629fab73d7267223d7.scope - libcontainer container ece9fec97327ca4b5f0e17246b3d9b14a758561ca2acff629fab73d7267223d7. Jan 20 03:06:58.111256 kubelet[2747]: E0120 03:06:58.111124 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:58.242048 containerd[1574]: time="2026-01-20T03:06:58.241958014Z" level=info msg="StartContainer for \"ece9fec97327ca4b5f0e17246b3d9b14a758561ca2acff629fab73d7267223d7\" returns successfully" Jan 20 03:06:58.642945 kubelet[2747]: E0120 03:06:58.642828 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:58.659390 kubelet[2747]: E0120 03:06:58.659342 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:58.716832 kubelet[2747]: E0120 03:06:58.716050 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:58.716832 kubelet[2747]: E0120 03:06:58.716407 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:06:58.717836 kubelet[2747]: E0120 03:06:58.717820 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:00.508948 update_engine[1553]: I20260120 03:07:00.507725 1553 update_attempter.cc:509] Updating boot flags... Jan 20 03:07:10.168076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616985892.mount: Deactivated successfully. 
Jan 20 03:07:12.425107 containerd[1574]: time="2026-01-20T03:07:12.424959282Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:12.426107 containerd[1574]: time="2026-01-20T03:07:12.426025602Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 20 03:07:12.427513 containerd[1574]: time="2026-01-20T03:07:12.427429293Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:12.429254 containerd[1574]: time="2026-01-20T03:07:12.429169674Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.399101856s" Jan 20 03:07:12.429254 containerd[1574]: time="2026-01-20T03:07:12.429231149Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 20 03:07:12.434707 containerd[1574]: time="2026-01-20T03:07:12.434683698Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 20 03:07:12.451755 containerd[1574]: time="2026-01-20T03:07:12.451700321Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 20 03:07:12.466814 containerd[1574]: time="2026-01-20T03:07:12.466724982Z" level=info msg="Container 1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:12.480547 containerd[1574]: time="2026-01-20T03:07:12.480477054Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\"" Jan 20 03:07:12.482762 containerd[1574]: time="2026-01-20T03:07:12.481318695Z" level=info msg="StartContainer for \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\"" Jan 20 03:07:12.482989 containerd[1574]: time="2026-01-20T03:07:12.482936053Z" level=info msg="connecting to shim 1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49" address="unix:///run/containerd/s/14eedf6d056043a2c25d5d1ae30a6398f8f3533b3117906e8295789a85afe04c" protocol=ttrpc version=3 Jan 20 03:07:12.518941 systemd[1]: Started cri-containerd-1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49.scope - libcontainer container 1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49. 
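The pull statistics above allow a quick throughput estimate: 166,730,503 bytes read in 14.399101856 s is roughly 11.6 MB/s. For instance:

```go
// pullrate.go: back-of-the-envelope throughput for the image pull logged
// above; both numbers are copied from the containerd messages.
package main

import "fmt"

func main() {
	const bytesRead = 166730503  // "bytes read" while pulling the cilium image
	const seconds = 14.399101856 // duration from the "Pulled image" line

	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n",
		bytesRead/seconds/1e6, bytesRead/seconds/(1<<20))
}
```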
Jan 20 03:07:12.605675 containerd[1574]: time="2026-01-20T03:07:12.605453252Z" level=info msg="StartContainer for \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\" returns successfully" Jan 20 03:07:12.623152 systemd[1]: cri-containerd-1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49.scope: Deactivated successfully. Jan 20 03:07:12.623377 containerd[1574]: time="2026-01-20T03:07:12.623251057Z" level=info msg="received container exit event container_id:\"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\" id:\"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\" pid:3195 exited_at:{seconds:1768878432 nanos:622413825}" Jan 20 03:07:12.666113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49-rootfs.mount: Deactivated successfully. Jan 20 03:07:12.752000 kubelet[2747]: E0120 03:07:12.751702 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:12.790251 kubelet[2747]: I0120 03:07:12.789917 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g75wz" podStartSLOduration=15.789855848 podStartE2EDuration="15.789855848s" podCreationTimestamp="2026-01-20 03:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:06:58.862315332 +0000 UTC m=+8.359659597" watchObservedRunningTime="2026-01-20 03:07:12.789855848 +0000 UTC m=+22.287200103" Jan 20 03:07:13.627375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965598470.mount: Deactivated successfully. 
Jan 20 03:07:13.759712 kubelet[2747]: E0120 03:07:13.758784 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:13.770534 containerd[1574]: time="2026-01-20T03:07:13.770457909Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 20 03:07:13.789752 containerd[1574]: time="2026-01-20T03:07:13.789687916Z" level=info msg="Container 95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:13.796943 containerd[1574]: time="2026-01-20T03:07:13.796811806Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\"" Jan 20 03:07:13.797918 containerd[1574]: time="2026-01-20T03:07:13.797815393Z" level=info msg="StartContainer for \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\"" Jan 20 03:07:13.799201 containerd[1574]: time="2026-01-20T03:07:13.799075958Z" level=info msg="connecting to shim 95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849" address="unix:///run/containerd/s/14eedf6d056043a2c25d5d1ae30a6398f8f3533b3117906e8295789a85afe04c" protocol=ttrpc version=3 Jan 20 03:07:13.843112 systemd[1]: Started cri-containerd-95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849.scope - libcontainer container 95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849. Jan 20 03:07:13.906806 containerd[1574]: time="2026-01-20T03:07:13.906714246Z" level=info msg="StartContainer for \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\" returns successfully" Jan 20 03:07:13.930911 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 03:07:13.931282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:07:13.931863 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:07:13.934330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:07:13.936067 systemd[1]: cri-containerd-95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849.scope: Deactivated successfully. Jan 20 03:07:13.938807 containerd[1574]: time="2026-01-20T03:07:13.938716243Z" level=info msg="received container exit event container_id:\"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\" id:\"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\" pid:3250 exited_at:{seconds:1768878433 nanos:936959133}" Jan 20 03:07:13.968332 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:07:14.622242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849-rootfs.mount: Deactivated successfully. 
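The apply-sysctl-overwrites step writes kernel parameters under /proc/sys (Cilium, for example, relaxes rp_filter for its interfaces), apparently prompting the host to re-run systemd-sysctl.service, which is what the Stopped/Starting pair around the container's exit shows. The mechanism is ordinary file I/O; a sketch, where the specific key is an assumption and not taken from this log:

```go
// sysctlrw.go: minimal read/write of a kernel parameter via /proc/sys, the
// same mechanism an init container like apply-sysctl-overwrites relies on.
// The rp_filter key is an illustrative assumption, not taken from this log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func sysctlPath(key string) string {
	return filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
}

func main() {
	const key = "net.ipv4.conf.all.rp_filter"

	old, err := os.ReadFile(sysctlPath(key))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s = %s", key, old)

	// Writing requires root; Cilium disables rp_filter for its datapath.
	if err := os.WriteFile(sysctlPath(key), []byte("0\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write failed (not root?):", err)
	}
}
```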
Jan 20 03:07:14.763721 kubelet[2747]: E0120 03:07:14.763654 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:14.773869 containerd[1574]: time="2026-01-20T03:07:14.773170721Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 20 03:07:14.796639 containerd[1574]: time="2026-01-20T03:07:14.796470140Z" level=info msg="Container 3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:14.810048 containerd[1574]: time="2026-01-20T03:07:14.809931693Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\"" Jan 20 03:07:14.811065 containerd[1574]: time="2026-01-20T03:07:14.811001363Z" level=info msg="StartContainer for \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\"" Jan 20 03:07:14.814159 containerd[1574]: time="2026-01-20T03:07:14.814120527Z" level=info msg="connecting to shim 3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b" address="unix:///run/containerd/s/14eedf6d056043a2c25d5d1ae30a6398f8f3533b3117906e8295789a85afe04c" protocol=ttrpc version=3 Jan 20 03:07:14.847853 systemd[1]: Started cri-containerd-3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b.scope - libcontainer container 3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b. Jan 20 03:07:14.965180 systemd[1]: cri-containerd-3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b.scope: Deactivated successfully. Jan 20 03:07:14.965447 containerd[1574]: time="2026-01-20T03:07:14.965208983Z" level=info msg="StartContainer for \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\" returns successfully" Jan 20 03:07:14.970218 containerd[1574]: time="2026-01-20T03:07:14.970156271Z" level=info msg="received container exit event container_id:\"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\" id:\"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\" pid:3302 exited_at:{seconds:1768878434 nanos:969929398}" Jan 20 03:07:15.008919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b-rootfs.mount: Deactivated successfully. 
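The mount-bpf-fs init container ensures the BPF filesystem is mounted at /sys/fs/bpf so pinned maps and programs survive agent restarts. An idempotent equivalent in Go, offered as a sketch (uses golang.org/x/sys/unix and requires root):

```go
// mountbpffs.go: idempotently mount the BPF filesystem at /sys/fs/bpf,
// which is the job of a mount-bpf-fs init container. Requires root.
package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/sys/unix"
)

// bpffsMounted reports whether a bpf filesystem is already at /sys/fs/bpf.
func bpffsMounted() (bool, error) {
	data, err := os.ReadFile("/proc/mounts")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		f := strings.Fields(line)
		if len(f) >= 3 && f[1] == "/sys/fs/bpf" && f[2] == "bpf" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	mounted, err := bpffsMounted()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if mounted {
		fmt.Println("bpffs already mounted at /sys/fs/bpf")
		return
	}
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		fmt.Fprintln(os.Stderr, "mount failed:", err)
		os.Exit(1)
	}
	fmt.Println("mounted bpffs at /sys/fs/bpf")
}
```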
Jan 20 03:07:15.139870 containerd[1574]: time="2026-01-20T03:07:15.139747003Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:15.141177 containerd[1574]: time="2026-01-20T03:07:15.141090283Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 20 03:07:15.142715 containerd[1574]: time="2026-01-20T03:07:15.142579565Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:07:15.144244 containerd[1574]: time="2026-01-20T03:07:15.144151471Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.709345355s" Jan 20 03:07:15.144244 containerd[1574]: time="2026-01-20T03:07:15.144207827Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 20 03:07:15.150649 containerd[1574]: time="2026-01-20T03:07:15.150372864Z" level=info msg="CreateContainer within sandbox \"aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 20 03:07:15.162169 containerd[1574]: time="2026-01-20T03:07:15.162044271Z" level=info msg="Container 3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:15.171683 containerd[1574]: time="2026-01-20T03:07:15.171489382Z" level=info msg="CreateContainer within sandbox \"aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\"" Jan 20 03:07:15.174317 containerd[1574]: time="2026-01-20T03:07:15.172846783Z" level=info msg="StartContainer for \"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\"" Jan 20 03:07:15.176859 containerd[1574]: time="2026-01-20T03:07:15.176670290Z" level=info msg="connecting to shim 3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337" address="unix:///run/containerd/s/fdbb4123ef13c2b7800a09378a4797ab068c58431dd37e1474e85c9913a025a1" protocol=ttrpc version=3 Jan 20 03:07:15.211075 systemd[1]: Started cri-containerd-3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337.scope - libcontainer container 3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337. 
Jan 20 03:07:15.279649 containerd[1574]: time="2026-01-20T03:07:15.279111596Z" level=info msg="StartContainer for \"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\" returns successfully" Jan 20 03:07:15.774811 kubelet[2747]: E0120 03:07:15.774678 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:15.781214 kubelet[2747]: E0120 03:07:15.781103 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:15.795050 containerd[1574]: time="2026-01-20T03:07:15.794452145Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 20 03:07:15.802260 kubelet[2747]: I0120 03:07:15.802143 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qfq9n" podStartSLOduration=1.770482611 podStartE2EDuration="18.802127s" podCreationTimestamp="2026-01-20 03:06:57 +0000 UTC" firstStartedPulling="2026-01-20 03:06:58.11340993 +0000 UTC m=+7.610754175" lastFinishedPulling="2026-01-20 03:07:15.145054318 +0000 UTC m=+24.642398564" observedRunningTime="2026-01-20 03:07:15.80111189 +0000 UTC m=+25.298456135" watchObservedRunningTime="2026-01-20 03:07:15.802127 +0000 UTC m=+25.299471265" Jan 20 03:07:15.821970 containerd[1574]: time="2026-01-20T03:07:15.820578914Z" level=info msg="Container 0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:15.833925 containerd[1574]: time="2026-01-20T03:07:15.832527866Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\"" Jan 20 03:07:15.837030 containerd[1574]: time="2026-01-20T03:07:15.835141091Z" level=info msg="StartContainer for \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\"" Jan 20 03:07:15.837030 containerd[1574]: time="2026-01-20T03:07:15.836418339Z" level=info msg="connecting to shim 0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b" address="unix:///run/containerd/s/14eedf6d056043a2c25d5d1ae30a6398f8f3533b3117906e8295789a85afe04c" protocol=ttrpc version=3 Jan 20 03:07:15.887828 systemd[1]: Started cri-containerd-0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b.scope - libcontainer container 0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b. Jan 20 03:07:15.992381 systemd[1]: cri-containerd-0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b.scope: Deactivated successfully. 
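The pod_startup_latency_tracker entries encode a simple relation: podStartSLOduration is the end-to-end startup time minus the time spent pulling images. That is why kube-proxy earlier showed identical SLO and E2E values (its pull timestamps are the zero value "0001-01-01 00:00:00", meaning no pull contributed), while cilium-operator shows 1.77 s against 18.8 s. Reproducing the cilium-operator numbers from the logged timestamps:

```go
// sloduration.go: reproduces the pod_startup_latency_tracker arithmetic for
// cilium-operator using the timestamps logged above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-20 03:06:57 +0000 UTC")
	firstPull := mustParse("2026-01-20 03:06:58.11340993 +0000 UTC")
	lastPull := mustParse("2026-01-20 03:07:15.145054318 +0000 UTC")
	observed := mustParse("2026-01-20 03:07:15.802127 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration: 18.802127s
	slo := e2e - lastPull.Sub(firstPull) // the SLO figure excludes image pulls
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo) // ~1.770482611s as logged
}
```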
Jan 20 03:07:15.994770 containerd[1574]: time="2026-01-20T03:07:15.994714629Z" level=info msg="received container exit event container_id:\"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\" id:\"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\" pid:3379 exited_at:{seconds:1768878435 nanos:992755652}" Jan 20 03:07:16.008160 containerd[1574]: time="2026-01-20T03:07:16.008018734Z" level=info msg="StartContainer for \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\" returns successfully" Jan 20 03:07:16.032293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b-rootfs.mount: Deactivated successfully. Jan 20 03:07:16.789385 kubelet[2747]: E0120 03:07:16.789295 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:16.791331 kubelet[2747]: E0120 03:07:16.790220 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:16.797696 containerd[1574]: time="2026-01-20T03:07:16.797487438Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 20 03:07:16.818747 containerd[1574]: time="2026-01-20T03:07:16.818652462Z" level=info msg="Container 4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:16.832314 containerd[1574]: time="2026-01-20T03:07:16.832202089Z" level=info msg="CreateContainer within sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\"" Jan 20 03:07:16.833028 containerd[1574]: time="2026-01-20T03:07:16.832950580Z" level=info msg="StartContainer for \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\"" Jan 20 03:07:16.835853 containerd[1574]: time="2026-01-20T03:07:16.835794305Z" level=info msg="connecting to shim 4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6" address="unix:///run/containerd/s/14eedf6d056043a2c25d5d1ae30a6398f8f3533b3117906e8295789a85afe04c" protocol=ttrpc version=3 Jan 20 03:07:16.873946 systemd[1]: Started cri-containerd-4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6.scope - libcontainer container 4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6. Jan 20 03:07:16.936086 containerd[1574]: time="2026-01-20T03:07:16.935983864Z" level=info msg="StartContainer for \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\" returns successfully" Jan 20 03:07:17.134701 kubelet[2747]: I0120 03:07:17.134311 2747 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 03:07:17.224707 systemd[1]: Created slice kubepods-burstable-podcc6cc0db_b540_4d9b_8151_82f48d01934a.slice - libcontainer container kubepods-burstable-podcc6cc0db_b540_4d9b_8151_82f48d01934a.slice. Jan 20 03:07:17.235796 systemd[1]: Created slice kubepods-burstable-pod79ffac7d_1a51_4112_a8a6_e5ce9d6f3fce.slice - libcontainer container kubepods-burstable-pod79ffac7d_1a51_4112_a8a6_e5ce9d6f3fce.slice. 
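With the four init steps done in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), the long-running cilium-agent container starts, and the kubelet immediately flips node readiness ("Fast updating node status as it just became ready"), unblocking the two pending coredns pods. A hedged client-go sketch of checking that condition from outside; the kubeconfig path and the node name "localhost" are assumptions:

```go
// nodeready.go: hedged client-go sketch of inspecting the condition the
// kubelet just flipped. Kubeconfig path and node name are assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%q\n", cond.Status, cond.Reason)
		}
	}
}
```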
Jan 20 03:07:17.260435 kubelet[2747]: I0120 03:07:17.260375 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc6cc0db-b540-4d9b-8151-82f48d01934a-config-volume\") pod \"coredns-674b8bbfcf-wgc9p\" (UID: \"cc6cc0db-b540-4d9b-8151-82f48d01934a\") " pod="kube-system/coredns-674b8bbfcf-wgc9p" Jan 20 03:07:17.260435 kubelet[2747]: I0120 03:07:17.260424 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p99hv\" (UniqueName: \"kubernetes.io/projected/79ffac7d-1a51-4112-a8a6-e5ce9d6f3fce-kube-api-access-p99hv\") pod \"coredns-674b8bbfcf-g8nw7\" (UID: \"79ffac7d-1a51-4112-a8a6-e5ce9d6f3fce\") " pod="kube-system/coredns-674b8bbfcf-g8nw7" Jan 20 03:07:17.260580 kubelet[2747]: I0120 03:07:17.260456 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm9zv\" (UniqueName: \"kubernetes.io/projected/cc6cc0db-b540-4d9b-8151-82f48d01934a-kube-api-access-bm9zv\") pod \"coredns-674b8bbfcf-wgc9p\" (UID: \"cc6cc0db-b540-4d9b-8151-82f48d01934a\") " pod="kube-system/coredns-674b8bbfcf-wgc9p" Jan 20 03:07:17.260580 kubelet[2747]: I0120 03:07:17.260487 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79ffac7d-1a51-4112-a8a6-e5ce9d6f3fce-config-volume\") pod \"coredns-674b8bbfcf-g8nw7\" (UID: \"79ffac7d-1a51-4112-a8a6-e5ce9d6f3fce\") " pod="kube-system/coredns-674b8bbfcf-g8nw7" Jan 20 03:07:17.532859 kubelet[2747]: E0120 03:07:17.532795 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:17.534355 containerd[1574]: time="2026-01-20T03:07:17.534254514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wgc9p,Uid:cc6cc0db-b540-4d9b-8151-82f48d01934a,Namespace:kube-system,Attempt:0,}" Jan 20 03:07:17.541070 kubelet[2747]: E0120 03:07:17.540677 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:17.541919 containerd[1574]: time="2026-01-20T03:07:17.541479187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g8nw7,Uid:79ffac7d-1a51-4112-a8a6-e5ce9d6f3fce,Namespace:kube-system,Attempt:0,}" Jan 20 03:07:17.802218 kubelet[2747]: E0120 03:07:17.802035 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:17.830367 kubelet[2747]: I0120 03:07:17.830060 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vfhrl" podStartSLOduration=6.423361822 podStartE2EDuration="20.830044318s" podCreationTimestamp="2026-01-20 03:06:57 +0000 UTC" firstStartedPulling="2026-01-20 03:06:58.027748425 +0000 UTC m=+7.525092660" lastFinishedPulling="2026-01-20 03:07:12.434430911 +0000 UTC m=+21.931775156" observedRunningTime="2026-01-20 03:07:17.828286434 +0000 UTC m=+27.325630689" watchObservedRunningTime="2026-01-20 03:07:17.830044318 +0000 UTC m=+27.327388562" Jan 20 03:07:18.804080 kubelet[2747]: E0120 03:07:18.803936 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:19.392026 systemd-networkd[1465]: cilium_host: Link UP Jan 20 03:07:19.394120 systemd-networkd[1465]: cilium_net: Link UP Jan 20 03:07:19.394390 systemd-networkd[1465]: cilium_net: Gained carrier Jan 20 03:07:19.397178 systemd-networkd[1465]: cilium_host: Gained carrier Jan 20 03:07:19.550517 systemd-networkd[1465]: cilium_vxlan: Link UP Jan 20 03:07:19.550655 systemd-networkd[1465]: cilium_vxlan: Gained carrier Jan 20 03:07:19.806418 kubelet[2747]: E0120 03:07:19.806238 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:19.815652 kernel: NET: Registered PF_ALG protocol family Jan 20 03:07:20.371841 systemd-networkd[1465]: cilium_net: Gained IPv6LL Jan 20 03:07:20.372386 systemd-networkd[1465]: cilium_host: Gained IPv6LL Jan 20 03:07:20.701680 systemd-networkd[1465]: lxc_health: Link UP Jan 20 03:07:20.718046 systemd-networkd[1465]: lxc_health: Gained carrier Jan 20 03:07:21.011996 systemd-networkd[1465]: cilium_vxlan: Gained IPv6LL Jan 20 03:07:21.128652 kernel: eth0: renamed from tmpb627e Jan 20 03:07:21.128424 systemd-networkd[1465]: lxc26a43aee2f7b: Link UP Jan 20 03:07:21.132036 systemd-networkd[1465]: lxc26a43aee2f7b: Gained carrier Jan 20 03:07:21.155056 systemd-networkd[1465]: lxc0159e993b158: Link UP Jan 20 03:07:21.158679 kernel: eth0: renamed from tmp53298 Jan 20 03:07:21.163147 systemd-networkd[1465]: lxc0159e993b158: Gained carrier Jan 20 03:07:21.843438 kubelet[2747]: E0120 03:07:21.843339 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:21.972012 systemd-networkd[1465]: lxc_health: Gained IPv6LL Jan 20 03:07:22.262076 systemd[1]: Started sshd@9-10.0.0.5:22-10.0.0.1:43472.service - OpenSSH per-connection server daemon (10.0.0.1:43472). Jan 20 03:07:22.330757 sshd[3915]: Accepted publickey for core from 10.0.0.1 port 43472 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:22.332534 sshd-session[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:22.338537 systemd-logind[1551]: New session 10 of user core. Jan 20 03:07:22.347970 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 03:07:22.493868 sshd[3918]: Connection closed by 10.0.0.1 port 43472 Jan 20 03:07:22.494198 sshd-session[3915]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:22.499075 systemd[1]: sshd@9-10.0.0.5:22-10.0.0.1:43472.service: Deactivated successfully. Jan 20 03:07:22.502020 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 03:07:22.503431 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit. Jan 20 03:07:22.505664 systemd-logind[1551]: Removed session 10. 
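The systemd-networkd burst above documents Cilium assembling its datapath: cilium_host and cilium_net form a veth pair for host connectivity, cilium_vxlan carries the overlay, lxc_health backs the agent's health checks, and each pod gets an lxc* veth whose peer is renamed eth0 inside the pod's namespace (the "eth0: renamed from tmpb627e" kernel lines; the temporary name appears to echo a sandbox ID prefix). The PF_ALG registration is the kernel loading AF_ALG crypto support. Enumerating these devices needs nothing beyond the standard library:

```go
// ifaces.go: list network interfaces and their flags; on a node like the
// one above you would expect cilium_host, cilium_net, cilium_vxlan,
// lxc_health and one lxc* device per running pod.
package main

import (
	"fmt"
	"net"
)

func main() {
	ifs, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifs {
		fmt.Printf("%-18s mtu=%-5d flags=%s\n", ifc.Name, ifc.MTU, ifc.Flags)
	}
}
```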
Jan 20 03:07:22.611989 systemd-networkd[1465]: lxc26a43aee2f7b: Gained IPv6LL Jan 20 03:07:22.812896 kubelet[2747]: E0120 03:07:22.812752 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:22.868166 systemd-networkd[1465]: lxc0159e993b158: Gained IPv6LL Jan 20 03:07:23.814635 kubelet[2747]: E0120 03:07:23.814545 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:24.969246 containerd[1574]: time="2026-01-20T03:07:24.969164572Z" level=info msg="connecting to shim 532987dc557ca903f5cef75b519fea90e9aeecf4bdcb9a0881a929eb447fda39" address="unix:///run/containerd/s/1a35cf30ac9f4ba7f122db37f3381d222f0e9b5e6f903ace55bb7d5b38d0ac60" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:25.007827 systemd[1]: Started cri-containerd-532987dc557ca903f5cef75b519fea90e9aeecf4bdcb9a0881a929eb447fda39.scope - libcontainer container 532987dc557ca903f5cef75b519fea90e9aeecf4bdcb9a0881a929eb447fda39. Jan 20 03:07:25.008751 containerd[1574]: time="2026-01-20T03:07:25.008204699Z" level=info msg="connecting to shim b627edefed53503ccf69b69464eec54dd6838d4b6a757aefb4c1d171c73e0cc4" address="unix:///run/containerd/s/eb75d14252328cd1ef55440e732e256ab482cfaf38c219572e7b8843e5090c06" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:07:25.032010 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:25.050953 systemd[1]: Started cri-containerd-b627edefed53503ccf69b69464eec54dd6838d4b6a757aefb4c1d171c73e0cc4.scope - libcontainer container b627edefed53503ccf69b69464eec54dd6838d4b6a757aefb4c1d171c73e0cc4. 
Jan 20 03:07:25.073264 containerd[1574]: time="2026-01-20T03:07:25.073147534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wgc9p,Uid:cc6cc0db-b540-4d9b-8151-82f48d01934a,Namespace:kube-system,Attempt:0,} returns sandbox id \"532987dc557ca903f5cef75b519fea90e9aeecf4bdcb9a0881a929eb447fda39\"" Jan 20 03:07:25.073982 kubelet[2747]: E0120 03:07:25.073957 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:25.074225 systemd-resolved[1466]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 03:07:25.084482 containerd[1574]: time="2026-01-20T03:07:25.084060683Z" level=info msg="CreateContainer within sandbox \"532987dc557ca903f5cef75b519fea90e9aeecf4bdcb9a0881a929eb447fda39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 03:07:25.099130 containerd[1574]: time="2026-01-20T03:07:25.099053522Z" level=info msg="Container 3267db6adf0ed64bb782cc59d191e057a1787fbe4248fe457f25e04b52afce11: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:25.106721 containerd[1574]: time="2026-01-20T03:07:25.105748846Z" level=info msg="CreateContainer within sandbox \"532987dc557ca903f5cef75b519fea90e9aeecf4bdcb9a0881a929eb447fda39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3267db6adf0ed64bb782cc59d191e057a1787fbe4248fe457f25e04b52afce11\"" Jan 20 03:07:25.107673 containerd[1574]: time="2026-01-20T03:07:25.107560943Z" level=info msg="StartContainer for \"3267db6adf0ed64bb782cc59d191e057a1787fbe4248fe457f25e04b52afce11\"" Jan 20 03:07:25.108726 containerd[1574]: time="2026-01-20T03:07:25.108559754Z" level=info msg="connecting to shim 3267db6adf0ed64bb782cc59d191e057a1787fbe4248fe457f25e04b52afce11" address="unix:///run/containerd/s/1a35cf30ac9f4ba7f122db37f3381d222f0e9b5e6f903ace55bb7d5b38d0ac60" protocol=ttrpc version=3 Jan 20 03:07:25.127203 containerd[1574]: time="2026-01-20T03:07:25.127145942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g8nw7,Uid:79ffac7d-1a51-4112-a8a6-e5ce9d6f3fce,Namespace:kube-system,Attempt:0,} returns sandbox id \"b627edefed53503ccf69b69464eec54dd6838d4b6a757aefb4c1d171c73e0cc4\"" Jan 20 03:07:25.128295 kubelet[2747]: E0120 03:07:25.128275 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:25.140058 containerd[1574]: time="2026-01-20T03:07:25.139967021Z" level=info msg="CreateContainer within sandbox \"b627edefed53503ccf69b69464eec54dd6838d4b6a757aefb4c1d171c73e0cc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 03:07:25.146865 systemd[1]: Started cri-containerd-3267db6adf0ed64bb782cc59d191e057a1787fbe4248fe457f25e04b52afce11.scope - libcontainer container 3267db6adf0ed64bb782cc59d191e057a1787fbe4248fe457f25e04b52afce11. 
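Both coredns pods now have sandboxes and containers being created; the two "Failed to determine the local hostname" messages from systemd-resolved are common noise while fresh network namespaces come and go. Once the coredns containers report started, cluster DNS should answer queries. A probe against an explicit resolver, as a sketch; the 10.96.0.10 address is the conventional kube-dns ClusterIP and is an assumption, not read from this log:

```go
// dnsprobe.go: resolve a cluster-internal name through a specific resolver.
// The 10.96.0.10 ClusterIP is an assumption, not taken from this log.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
```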
Jan 20 03:07:25.152709 containerd[1574]: time="2026-01-20T03:07:25.152366246Z" level=info msg="Container a3dec732700774dca399914fa5f644ce514ba76475f3d941a7f648400c458a06: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:07:25.171532 containerd[1574]: time="2026-01-20T03:07:25.171365852Z" level=info msg="CreateContainer within sandbox \"b627edefed53503ccf69b69464eec54dd6838d4b6a757aefb4c1d171c73e0cc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3dec732700774dca399914fa5f644ce514ba76475f3d941a7f648400c458a06\"" Jan 20 03:07:25.172763 containerd[1574]: time="2026-01-20T03:07:25.172701157Z" level=info msg="StartContainer for \"a3dec732700774dca399914fa5f644ce514ba76475f3d941a7f648400c458a06\"" Jan 20 03:07:25.173579 containerd[1574]: time="2026-01-20T03:07:25.173499313Z" level=info msg="connecting to shim a3dec732700774dca399914fa5f644ce514ba76475f3d941a7f648400c458a06" address="unix:///run/containerd/s/eb75d14252328cd1ef55440e732e256ab482cfaf38c219572e7b8843e5090c06" protocol=ttrpc version=3 Jan 20 03:07:25.207891 systemd[1]: Started cri-containerd-a3dec732700774dca399914fa5f644ce514ba76475f3d941a7f648400c458a06.scope - libcontainer container a3dec732700774dca399914fa5f644ce514ba76475f3d941a7f648400c458a06. Jan 20 03:07:25.215309 containerd[1574]: time="2026-01-20T03:07:25.215220964Z" level=info msg="StartContainer for \"3267db6adf0ed64bb782cc59d191e057a1787fbe4248fe457f25e04b52afce11\" returns successfully" Jan 20 03:07:25.260876 containerd[1574]: time="2026-01-20T03:07:25.260089212Z" level=info msg="StartContainer for \"a3dec732700774dca399914fa5f644ce514ba76475f3d941a7f648400c458a06\" returns successfully" Jan 20 03:07:25.828751 kubelet[2747]: E0120 03:07:25.828423 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:25.833872 kubelet[2747]: E0120 03:07:25.833792 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:25.845248 kubelet[2747]: I0120 03:07:25.845061 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g8nw7" podStartSLOduration=28.845043447 podStartE2EDuration="28.845043447s" podCreationTimestamp="2026-01-20 03:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:07:25.844787403 +0000 UTC m=+35.342131658" watchObservedRunningTime="2026-01-20 03:07:25.845043447 +0000 UTC m=+35.342387702" Jan 20 03:07:25.879545 kubelet[2747]: I0120 03:07:25.879474 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wgc9p" podStartSLOduration=28.879451845 podStartE2EDuration="28.879451845s" podCreationTimestamp="2026-01-20 03:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:07:25.861260677 +0000 UTC m=+35.358604922" watchObservedRunningTime="2026-01-20 03:07:25.879451845 +0000 UTC m=+35.376796090" Jan 20 03:07:26.836899 kubelet[2747]: E0120 03:07:26.836816 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:26.836899 kubelet[2747]: E0120 
03:07:26.836890 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:27.516452 systemd[1]: Started sshd@10-10.0.0.5:22-10.0.0.1:45154.service - OpenSSH per-connection server daemon (10.0.0.1:45154). Jan 20 03:07:27.585499 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 45154 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:27.587273 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:27.594156 systemd-logind[1551]: New session 11 of user core. Jan 20 03:07:27.608036 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 03:07:27.766259 sshd[4113]: Connection closed by 10.0.0.1 port 45154 Jan 20 03:07:27.766730 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:27.772144 systemd[1]: sshd@10-10.0.0.5:22-10.0.0.1:45154.service: Deactivated successfully. Jan 20 03:07:27.774371 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 03:07:27.775823 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit. Jan 20 03:07:27.778208 systemd-logind[1551]: Removed session 11. Jan 20 03:07:27.840890 kubelet[2747]: E0120 03:07:27.840835 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:27.841389 kubelet[2747]: E0120 03:07:27.841001 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:07:32.792973 systemd[1]: Started sshd@11-10.0.0.5:22-10.0.0.1:45156.service - OpenSSH per-connection server daemon (10.0.0.1:45156). Jan 20 03:07:32.876409 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 45156 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:32.878374 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:32.885032 systemd-logind[1551]: New session 12 of user core. Jan 20 03:07:32.894890 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 03:07:33.054092 sshd[4135]: Connection closed by 10.0.0.1 port 45156 Jan 20 03:07:33.054463 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:33.059190 systemd[1]: sshd@11-10.0.0.5:22-10.0.0.1:45156.service: Deactivated successfully. Jan 20 03:07:33.062397 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 03:07:33.066267 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit. Jan 20 03:07:33.068729 systemd-logind[1551]: Removed session 12. Jan 20 03:07:38.072541 systemd[1]: Started sshd@12-10.0.0.5:22-10.0.0.1:45718.service - OpenSSH per-connection server daemon (10.0.0.1:45718). Jan 20 03:07:38.144573 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 45718 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:38.146570 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:38.154079 systemd-logind[1551]: New session 13 of user core. Jan 20 03:07:38.164837 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 03:07:38.326083 sshd[4152]: Connection closed by 10.0.0.1 port 45718 Jan 20 03:07:38.326412 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:38.331896 systemd[1]: sshd@12-10.0.0.5:22-10.0.0.1:45718.service: Deactivated successfully. Jan 20 03:07:38.334080 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 03:07:38.336369 systemd-logind[1551]: Session 13 logged out. Waiting for processes to exit. Jan 20 03:07:38.338576 systemd-logind[1551]: Removed session 13. Jan 20 03:07:43.345654 systemd[1]: Started sshd@13-10.0.0.5:22-10.0.0.1:45724.service - OpenSSH per-connection server daemon (10.0.0.1:45724). Jan 20 03:07:43.415008 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 45724 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:43.416902 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:43.423425 systemd-logind[1551]: New session 14 of user core. Jan 20 03:07:43.434868 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 03:07:43.578338 sshd[4170]: Connection closed by 10.0.0.1 port 45724 Jan 20 03:07:43.578812 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:43.584770 systemd[1]: sshd@13-10.0.0.5:22-10.0.0.1:45724.service: Deactivated successfully. Jan 20 03:07:43.586976 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 03:07:43.588129 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit. Jan 20 03:07:43.590327 systemd-logind[1551]: Removed session 14. Jan 20 03:07:48.599575 systemd[1]: Started sshd@14-10.0.0.5:22-10.0.0.1:44088.service - OpenSSH per-connection server daemon (10.0.0.1:44088). Jan 20 03:07:48.669917 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 44088 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:48.672787 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:48.680461 systemd-logind[1551]: New session 15 of user core. Jan 20 03:07:48.691938 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 03:07:48.849420 sshd[4187]: Connection closed by 10.0.0.1 port 44088 Jan 20 03:07:48.849491 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:48.862215 systemd[1]: sshd@14-10.0.0.5:22-10.0.0.1:44088.service: Deactivated successfully. Jan 20 03:07:48.865227 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 03:07:48.866842 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit. Jan 20 03:07:48.869490 systemd[1]: Started sshd@15-10.0.0.5:22-10.0.0.1:44104.service - OpenSSH per-connection server daemon (10.0.0.1:44104). Jan 20 03:07:48.872524 systemd-logind[1551]: Removed session 15. Jan 20 03:07:48.942067 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 44104 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:48.944445 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:48.951306 systemd-logind[1551]: New session 16 of user core. Jan 20 03:07:48.958943 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 03:07:49.180037 sshd[4204]: Connection closed by 10.0.0.1 port 44104 Jan 20 03:07:49.184416 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:49.190960 systemd[1]: sshd@15-10.0.0.5:22-10.0.0.1:44104.service: Deactivated successfully. Jan 20 03:07:49.193393 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 03:07:49.195569 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit. Jan 20 03:07:49.199403 systemd[1]: Started sshd@16-10.0.0.5:22-10.0.0.1:44110.service - OpenSSH per-connection server daemon (10.0.0.1:44110). Jan 20 03:07:49.202833 systemd-logind[1551]: Removed session 16. Jan 20 03:07:49.279544 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 44110 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:49.281422 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:49.288337 systemd-logind[1551]: New session 17 of user core. Jan 20 03:07:49.300876 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 03:07:49.444731 sshd[4219]: Connection closed by 10.0.0.1 port 44110 Jan 20 03:07:49.444826 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:49.449945 systemd[1]: sshd@16-10.0.0.5:22-10.0.0.1:44110.service: Deactivated successfully. Jan 20 03:07:49.452440 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 03:07:49.454014 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit. Jan 20 03:07:49.456273 systemd-logind[1551]: Removed session 17. Jan 20 03:07:54.465307 systemd[1]: Started sshd@17-10.0.0.5:22-10.0.0.1:33752.service - OpenSSH per-connection server daemon (10.0.0.1:33752). Jan 20 03:07:54.547142 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 33752 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:54.550474 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:54.558344 systemd-logind[1551]: New session 18 of user core. Jan 20 03:07:54.567924 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 03:07:54.708739 sshd[4238]: Connection closed by 10.0.0.1 port 33752 Jan 20 03:07:54.709029 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Jan 20 03:07:54.713789 systemd[1]: sshd@17-10.0.0.5:22-10.0.0.1:33752.service: Deactivated successfully. Jan 20 03:07:54.716152 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 03:07:54.717839 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit. Jan 20 03:07:54.720148 systemd-logind[1551]: Removed session 18. Jan 20 03:07:59.727051 systemd[1]: Started sshd@18-10.0.0.5:22-10.0.0.1:33774.service - OpenSSH per-connection server daemon (10.0.0.1:33774). Jan 20 03:07:59.800019 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 33774 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:07:59.801967 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:07:59.808870 systemd-logind[1551]: New session 19 of user core. Jan 20 03:07:59.828921 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 03:08:00.112910 sshd[4257]: Connection closed by 10.0.0.1 port 33774 Jan 20 03:08:00.114917 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:00.127905 systemd[1]: sshd@18-10.0.0.5:22-10.0.0.1:33774.service: Deactivated successfully. Jan 20 03:08:00.130012 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 03:08:00.131273 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit. Jan 20 03:08:00.134127 systemd[1]: Started sshd@19-10.0.0.5:22-10.0.0.1:33776.service - OpenSSH per-connection server daemon (10.0.0.1:33776). Jan 20 03:08:00.136121 systemd-logind[1551]: Removed session 19. Jan 20 03:08:00.203389 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 33776 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:00.205166 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:00.214120 systemd-logind[1551]: New session 20 of user core. Jan 20 03:08:00.228034 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 03:08:00.541520 sshd[4274]: Connection closed by 10.0.0.1 port 33776 Jan 20 03:08:00.539351 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:00.563827 systemd[1]: sshd@19-10.0.0.5:22-10.0.0.1:33776.service: Deactivated successfully. Jan 20 03:08:00.567411 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 03:08:00.569371 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit. Jan 20 03:08:00.573551 systemd[1]: Started sshd@20-10.0.0.5:22-10.0.0.1:33788.service - OpenSSH per-connection server daemon (10.0.0.1:33788). Jan 20 03:08:00.575654 systemd-logind[1551]: Removed session 20. Jan 20 03:08:00.657382 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 33788 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:00.659805 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:00.674307 kubelet[2747]: E0120 03:08:00.674215 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:08:00.678932 systemd-logind[1551]: New session 21 of user core. Jan 20 03:08:00.698307 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 03:08:01.452211 sshd[4288]: Connection closed by 10.0.0.1 port 33788 Jan 20 03:08:01.454020 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:01.466896 systemd[1]: sshd@20-10.0.0.5:22-10.0.0.1:33788.service: Deactivated successfully. Jan 20 03:08:01.470982 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 03:08:01.479135 systemd-logind[1551]: Session 21 logged out. Waiting for processes to exit. Jan 20 03:08:01.485507 systemd[1]: Started sshd@21-10.0.0.5:22-10.0.0.1:33806.service - OpenSSH per-connection server daemon (10.0.0.1:33806). Jan 20 03:08:01.494759 systemd-logind[1551]: Removed session 21. Jan 20 03:08:01.577512 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 33806 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:01.580189 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:01.589406 systemd-logind[1551]: New session 22 of user core. Jan 20 03:08:01.601888 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 03:08:01.972810 sshd[4313]: Connection closed by 10.0.0.1 port 33806 Jan 20 03:08:01.973128 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:01.987366 systemd[1]: sshd@21-10.0.0.5:22-10.0.0.1:33806.service: Deactivated successfully. Jan 20 03:08:01.991401 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 03:08:01.994505 systemd-logind[1551]: Session 22 logged out. Waiting for processes to exit. Jan 20 03:08:01.997990 systemd[1]: Started sshd@22-10.0.0.5:22-10.0.0.1:33812.service - OpenSSH per-connection server daemon (10.0.0.1:33812). Jan 20 03:08:02.000303 systemd-logind[1551]: Removed session 22. Jan 20 03:08:02.072909 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 33812 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:02.075731 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:02.092573 systemd-logind[1551]: New session 23 of user core. Jan 20 03:08:02.104922 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 03:08:02.334393 sshd[4327]: Connection closed by 10.0.0.1 port 33812 Jan 20 03:08:02.334552 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:02.344554 systemd[1]: sshd@22-10.0.0.5:22-10.0.0.1:33812.service: Deactivated successfully. Jan 20 03:08:02.347946 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 03:08:02.350105 systemd-logind[1551]: Session 23 logged out. Waiting for processes to exit. Jan 20 03:08:02.353784 systemd-logind[1551]: Removed session 23. Jan 20 03:08:05.688291 kubelet[2747]: E0120 03:08:05.687432 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:08:06.672166 kubelet[2747]: E0120 03:08:06.671421 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:08:07.366451 systemd[1]: Started sshd@23-10.0.0.5:22-10.0.0.1:41000.service - OpenSSH per-connection server daemon (10.0.0.1:41000). Jan 20 03:08:07.515180 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 41000 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:07.523152 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:07.547374 systemd-logind[1551]: New session 24 of user core. Jan 20 03:08:07.555357 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 03:08:07.846415 sshd[4343]: Connection closed by 10.0.0.1 port 41000 Jan 20 03:08:07.847243 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:07.858522 systemd[1]: sshd@23-10.0.0.5:22-10.0.0.1:41000.service: Deactivated successfully. Jan 20 03:08:07.864547 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 03:08:07.871139 systemd-logind[1551]: Session 24 logged out. Waiting for processes to exit. Jan 20 03:08:07.874777 systemd-logind[1551]: Removed session 24. Jan 20 03:08:12.880929 systemd[1]: Started sshd@24-10.0.0.5:22-10.0.0.1:41050.service - OpenSSH per-connection server daemon (10.0.0.1:41050). 
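The recurring kubelet dns.go warning above is the resolver limit check: the glibc resolver honors at most three nameserver entries, so kubelet drops the extras from the resolv.conf it hands to pods and reports the three it applied (1.1.1.1, 1.0.0.1, 8.8.8.8). A hedged check, assuming kubelet reads systemd-resolved's upstream file as is common on Flatcar (the path is an assumption and may differ per node configuration):

  # more than three entries here triggers the kubelet warning
  grep -c '^nameserver' /run/systemd/resolve/resolv.conf
  # trimming the list to three entries, e.g. the trio above, silences it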
Jan 20 03:08:13.021528 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 41050 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:13.034962 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:13.082776 systemd-logind[1551]: New session 25 of user core. Jan 20 03:08:13.118382 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 03:08:13.544833 sshd[4360]: Connection closed by 10.0.0.1 port 41050 Jan 20 03:08:13.551209 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:13.571907 systemd[1]: sshd@24-10.0.0.5:22-10.0.0.1:41050.service: Deactivated successfully. Jan 20 03:08:13.583259 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 03:08:13.598887 systemd-logind[1551]: Session 25 logged out. Waiting for processes to exit. Jan 20 03:08:13.608420 systemd-logind[1551]: Removed session 25. Jan 20 03:08:17.670938 kubelet[2747]: E0120 03:08:17.670532 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:08:18.587227 systemd[1]: Started sshd@25-10.0.0.5:22-10.0.0.1:33040.service - OpenSSH per-connection server daemon (10.0.0.1:33040). Jan 20 03:08:18.738828 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 33040 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:18.741344 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:18.757863 systemd-logind[1551]: New session 26 of user core. Jan 20 03:08:18.776380 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 03:08:19.144110 sshd[4380]: Connection closed by 10.0.0.1 port 33040 Jan 20 03:08:19.144860 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:19.162435 systemd[1]: sshd@25-10.0.0.5:22-10.0.0.1:33040.service: Deactivated successfully. Jan 20 03:08:19.169210 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 03:08:19.172807 systemd-logind[1551]: Session 26 logged out. Waiting for processes to exit. Jan 20 03:08:19.184257 systemd-logind[1551]: Removed session 26. Jan 20 03:08:21.671747 kubelet[2747]: E0120 03:08:21.670790 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 03:08:24.178951 systemd[1]: Started sshd@26-10.0.0.5:22-10.0.0.1:33070.service - OpenSSH per-connection server daemon (10.0.0.1:33070). Jan 20 03:08:24.323225 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 33070 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:24.327165 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:24.344355 systemd-logind[1551]: New session 27 of user core. Jan 20 03:08:24.353492 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 03:08:24.627214 sshd[4397]: Connection closed by 10.0.0.1 port 33070 Jan 20 03:08:24.628477 sshd-session[4394]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:24.644808 systemd[1]: sshd@26-10.0.0.5:22-10.0.0.1:33070.service: Deactivated successfully. Jan 20 03:08:24.649104 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 03:08:24.654226 systemd-logind[1551]: Session 27 logged out. Waiting for processes to exit. 
Jan 20 03:08:24.657431 systemd[1]: Started sshd@27-10.0.0.5:22-10.0.0.1:40742.service - OpenSSH per-connection server daemon (10.0.0.1:40742). Jan 20 03:08:24.662362 systemd-logind[1551]: Removed session 27. Jan 20 03:08:24.770417 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 40742 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:24.772492 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:24.798498 systemd-logind[1551]: New session 28 of user core. Jan 20 03:08:24.815099 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 03:08:26.282193 kernel: hrtimer: interrupt took 2861023 ns Jan 20 03:08:26.773881 containerd[1574]: time="2026-01-20T03:08:26.773503975Z" level=info msg="StopContainer for \"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\" with timeout 30 (s)" Jan 20 03:08:26.805548 containerd[1574]: time="2026-01-20T03:08:26.805232563Z" level=info msg="Stop container \"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\" with signal terminated" Jan 20 03:08:26.881939 systemd[1]: cri-containerd-3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337.scope: Deactivated successfully. Jan 20 03:08:26.892710 containerd[1574]: time="2026-01-20T03:08:26.892322869Z" level=info msg="received container exit event container_id:\"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\" id:\"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\" pid:3347 exited_at:{seconds:1768878506 nanos:887733321}" Jan 20 03:08:26.929724 containerd[1574]: time="2026-01-20T03:08:26.929677108Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 03:08:26.961217 containerd[1574]: time="2026-01-20T03:08:26.939759300Z" level=info msg="StopContainer for \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\" with timeout 2 (s)" Jan 20 03:08:26.962308 containerd[1574]: time="2026-01-20T03:08:26.962197538Z" level=info msg="Stop container \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\" with signal terminated" Jan 20 03:08:26.987106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337-rootfs.mount: Deactivated successfully. Jan 20 03:08:27.011279 systemd-networkd[1465]: lxc_health: Link DOWN Jan 20 03:08:27.013155 systemd-networkd[1465]: lxc_health: Lost carrier Jan 20 03:08:27.051880 containerd[1574]: time="2026-01-20T03:08:27.050331794Z" level=info msg="StopContainer for \"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\" returns successfully" Jan 20 03:08:27.054530 systemd[1]: cri-containerd-4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6.scope: Deactivated successfully. Jan 20 03:08:27.055566 systemd[1]: cri-containerd-4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6.scope: Consumed 8.440s CPU time, 125.7M memory peak, 188K read from disk, 13.3M written to disk. 
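The StopContainer ... with timeout 30 (s) / with signal terminated pair above is the CRI shutdown handshake: containerd delivers SIGTERM, waits out the grace period before escalating to SIGKILL, and the matching cri-containerd-<id>.scope unit deactivates once the task exits. The same operation done by hand, sketched with crictl (container ID truncated for illustration):

  # graceful stop with a 30-second grace period, then confirm the state
  crictl stop --timeout 30 3efd504bf564
  crictl inspect 3efd504bf564 | grep -i state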
Jan 20 03:08:27.059343 containerd[1574]: time="2026-01-20T03:08:27.059186647Z" level=info msg="received container exit event container_id:\"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\" id:\"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\" pid:3417 exited_at:{seconds:1768878507 nanos:58107992}" Jan 20 03:08:27.063500 containerd[1574]: time="2026-01-20T03:08:27.060894150Z" level=info msg="StopPodSandbox for \"aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880\"" Jan 20 03:08:27.063715 containerd[1574]: time="2026-01-20T03:08:27.063524958Z" level=info msg="Container to stop \"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:08:27.086936 systemd[1]: cri-containerd-aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880.scope: Deactivated successfully. Jan 20 03:08:27.104253 containerd[1574]: time="2026-01-20T03:08:27.103104831Z" level=info msg="received sandbox exit event container_id:\"aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880\" id:\"aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880\" exit_status:137 exited_at:{seconds:1768878507 nanos:102256935}" monitor_name=podsandbox Jan 20 03:08:27.192534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6-rootfs.mount: Deactivated successfully. Jan 20 03:08:27.224079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880-rootfs.mount: Deactivated successfully. Jan 20 03:08:27.230083 containerd[1574]: time="2026-01-20T03:08:27.229959979Z" level=info msg="StopContainer for \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\" returns successfully" Jan 20 03:08:27.232532 containerd[1574]: time="2026-01-20T03:08:27.232504586Z" level=info msg="StopPodSandbox for \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\"" Jan 20 03:08:27.233389 containerd[1574]: time="2026-01-20T03:08:27.232958961Z" level=info msg="Container to stop \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:08:27.233389 containerd[1574]: time="2026-01-20T03:08:27.233063716Z" level=info msg="Container to stop \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:08:27.233389 containerd[1574]: time="2026-01-20T03:08:27.233079596Z" level=info msg="Container to stop \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:08:27.233389 containerd[1574]: time="2026-01-20T03:08:27.233090925Z" level=info msg="Container to stop \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:08:27.233389 containerd[1574]: time="2026-01-20T03:08:27.233102878Z" level=info msg="Container to stop \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 20 03:08:27.240964 containerd[1574]: time="2026-01-20T03:08:27.240814407Z" level=info msg="shim disconnected" id=aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880 namespace=k8s.io Jan 20 
03:08:27.240964 containerd[1574]: time="2026-01-20T03:08:27.240920474Z" level=warning msg="cleaning up after shim disconnected" id=aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880 namespace=k8s.io Jan 20 03:08:27.255386 systemd[1]: cri-containerd-983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5.scope: Deactivated successfully. Jan 20 03:08:27.267726 containerd[1574]: time="2026-01-20T03:08:27.240932997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 03:08:27.269912 containerd[1574]: time="2026-01-20T03:08:27.253312128Z" level=info msg="received sandbox exit event container_id:\"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" id:\"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" exit_status:137 exited_at:{seconds:1768878507 nanos:252151582}" monitor_name=podsandbox Jan 20 03:08:27.375190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5-rootfs.mount: Deactivated successfully. Jan 20 03:08:27.396361 containerd[1574]: time="2026-01-20T03:08:27.396317634Z" level=info msg="TearDown network for sandbox \"aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880\" successfully" Jan 20 03:08:27.398271 containerd[1574]: time="2026-01-20T03:08:27.398244455Z" level=info msg="StopPodSandbox for \"aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880\" returns successfully" Jan 20 03:08:27.402463 containerd[1574]: time="2026-01-20T03:08:27.399968208Z" level=info msg="shim disconnected" id=983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5 namespace=k8s.io Jan 20 03:08:27.402463 containerd[1574]: time="2026-01-20T03:08:27.400136571Z" level=warning msg="cleaning up after shim disconnected" id=983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5 namespace=k8s.io Jan 20 03:08:27.402463 containerd[1574]: time="2026-01-20T03:08:27.400151027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 03:08:27.404922 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880-shm.mount: Deactivated successfully. 
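With the pod's containers exited, the sandbox itself is torn down above: StopPodSandbox kills the pause container (exit_status:137 is 128+SIGKILL), the shim disconnects, and the sandbox's rootfs and shm mounts are released. A comparable manual teardown, assuming crictl and a truncated sandbox ID:

  # stop, then remove, a pod sandbox directly
  crictl stopp aa21a542b221
  crictl rmp aa21a542b221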
Jan 20 03:08:27.436399 containerd[1574]: time="2026-01-20T03:08:27.436118809Z" level=info msg="received sandbox container exit event sandbox_id:\"aa21a542b22110223a489306763d680ad2812c8a7b2cf460f309734b5debb880\" exit_status:137 exited_at:{seconds:1768878507 nanos:102256935}" monitor_name=criService Jan 20 03:08:27.455853 kubelet[2747]: I0120 03:08:27.455444 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d-cilium-config-path\") pod \"faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d\" (UID: \"faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d\") " Jan 20 03:08:27.455853 kubelet[2747]: I0120 03:08:27.455504 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrjhw\" (UniqueName: \"kubernetes.io/projected/faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d-kube-api-access-nrjhw\") pod \"faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d\" (UID: \"faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d\") " Jan 20 03:08:27.470461 containerd[1574]: time="2026-01-20T03:08:27.462099920Z" level=info msg="received sandbox container exit event sandbox_id:\"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" exit_status:137 exited_at:{seconds:1768878507 nanos:252151582}" monitor_name=criService Jan 20 03:08:27.470461 containerd[1574]: time="2026-01-20T03:08:27.466760060Z" level=info msg="TearDown network for sandbox \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" successfully" Jan 20 03:08:27.470461 containerd[1574]: time="2026-01-20T03:08:27.466790916Z" level=info msg="StopPodSandbox for \"983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5\" returns successfully" Jan 20 03:08:27.477198 kubelet[2747]: I0120 03:08:27.476329 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d" (UID: "faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 03:08:27.482186 kubelet[2747]: I0120 03:08:27.482110 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d-kube-api-access-nrjhw" (OuterVolumeSpecName: "kube-api-access-nrjhw") pod "faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d" (UID: "faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d"). InnerVolumeSpecName "kube-api-access-nrjhw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 03:08:27.559856 kubelet[2747]: I0120 03:08:27.559327 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cni-path\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.559856 kubelet[2747]: I0120 03:08:27.559387 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d8b902c-7287-4a58-86c3-239fdf52d565-clustermesh-secrets\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.559856 kubelet[2747]: I0120 03:08:27.559420 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-config-path\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.559856 kubelet[2747]: I0120 03:08:27.559443 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-host-proc-sys-net\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.559856 kubelet[2747]: I0120 03:08:27.559465 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-host-proc-sys-kernel\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.559856 kubelet[2747]: I0120 03:08:27.559520 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-etc-cni-netd\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.560891 kubelet[2747]: I0120 03:08:27.559543 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-xtables-lock\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.560891 kubelet[2747]: I0120 03:08:27.559571 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8vg7\" (UniqueName: \"kubernetes.io/projected/4d8b902c-7287-4a58-86c3-239fdf52d565-kube-api-access-w8vg7\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.560891 kubelet[2747]: I0120 03:08:27.560072 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-bpf-maps\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.560891 kubelet[2747]: I0120 03:08:27.560100 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-hostproc\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: 
\"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.560891 kubelet[2747]: I0120 03:08:27.560125 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-run\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.560891 kubelet[2747]: I0120 03:08:27.560145 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-lib-modules\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.564121 kubelet[2747]: I0120 03:08:27.560166 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-cgroup\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.564121 kubelet[2747]: I0120 03:08:27.560191 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d8b902c-7287-4a58-86c3-239fdf52d565-hubble-tls\") pod \"4d8b902c-7287-4a58-86c3-239fdf52d565\" (UID: \"4d8b902c-7287-4a58-86c3-239fdf52d565\") " Jan 20 03:08:27.564121 kubelet[2747]: I0120 03:08:27.560246 2747 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.564121 kubelet[2747]: I0120 03:08:27.560261 2747 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nrjhw\" (UniqueName: \"kubernetes.io/projected/faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d-kube-api-access-nrjhw\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.564121 kubelet[2747]: I0120 03:08:27.561749 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cni-path" (OuterVolumeSpecName: "cni-path") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.564121 kubelet[2747]: I0120 03:08:27.562271 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.564349 kubelet[2747]: I0120 03:08:27.562310 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.564349 kubelet[2747]: I0120 03:08:27.562334 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-hostproc" (OuterVolumeSpecName: "hostproc") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.564349 kubelet[2747]: I0120 03:08:27.564301 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.564349 kubelet[2747]: I0120 03:08:27.564345 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.564513 kubelet[2747]: I0120 03:08:27.564373 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.564513 kubelet[2747]: I0120 03:08:27.564396 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.564513 kubelet[2747]: I0120 03:08:27.564416 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.564513 kubelet[2747]: I0120 03:08:27.564437 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 20 03:08:27.574856 kubelet[2747]: I0120 03:08:27.574800 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 03:08:27.584142 kubelet[2747]: I0120 03:08:27.583801 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d8b902c-7287-4a58-86c3-239fdf52d565-kube-api-access-w8vg7" (OuterVolumeSpecName: "kube-api-access-w8vg7") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "kube-api-access-w8vg7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 03:08:27.592197 kubelet[2747]: I0120 03:08:27.588377 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d8b902c-7287-4a58-86c3-239fdf52d565-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 03:08:27.609860 kubelet[2747]: I0120 03:08:27.609803 2747 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d8b902c-7287-4a58-86c3-239fdf52d565-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4d8b902c-7287-4a58-86c3-239fdf52d565" (UID: "4d8b902c-7287-4a58-86c3-239fdf52d565"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 03:08:27.663735 kubelet[2747]: I0120 03:08:27.663448 2747 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664062 kubelet[2747]: I0120 03:08:27.663878 2747 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664062 kubelet[2747]: I0120 03:08:27.663902 2747 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664062 kubelet[2747]: I0120 03:08:27.663919 2747 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664062 kubelet[2747]: I0120 03:08:27.663932 2747 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664062 kubelet[2747]: I0120 03:08:27.663946 2747 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w8vg7\" (UniqueName: \"kubernetes.io/projected/4d8b902c-7287-4a58-86c3-239fdf52d565-kube-api-access-w8vg7\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664062 kubelet[2747]: I0120 03:08:27.663960 2747 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664062 kubelet[2747]: I0120 03:08:27.663974 2747 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-hostproc\") on node \"localhost\" 
DevicePath \"\"" Jan 20 03:08:27.664283 kubelet[2747]: I0120 03:08:27.664085 2747 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664283 kubelet[2747]: I0120 03:08:27.664098 2747 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664283 kubelet[2747]: I0120 03:08:27.664111 2747 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664283 kubelet[2747]: I0120 03:08:27.664124 2747 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d8b902c-7287-4a58-86c3-239fdf52d565-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664283 kubelet[2747]: I0120 03:08:27.664134 2747 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d8b902c-7287-4a58-86c3-239fdf52d565-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.664283 kubelet[2747]: I0120 03:08:27.664144 2747 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d8b902c-7287-4a58-86c3-239fdf52d565-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 20 03:08:27.982411 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-983e8861cd9e1a223e3d3a470bef1bf6cad628692bf5d40e1f997e7fdc05d7a5-shm.mount: Deactivated successfully. Jan 20 03:08:27.983978 systemd[1]: var-lib-kubelet-pods-faa7e1e9\x2d50d2\x2d4e3f\x2d9cf5\x2d8a1a7da82c2d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnrjhw.mount: Deactivated successfully. Jan 20 03:08:27.984268 systemd[1]: var-lib-kubelet-pods-4d8b902c\x2d7287\x2d4a58\x2d86c3\x2d239fdf52d565-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw8vg7.mount: Deactivated successfully. Jan 20 03:08:27.984376 systemd[1]: var-lib-kubelet-pods-4d8b902c\x2d7287\x2d4a58\x2d86c3\x2d239fdf52d565-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 20 03:08:27.984495 systemd[1]: var-lib-kubelet-pods-4d8b902c\x2d7287\x2d4a58\x2d86c3\x2d239fdf52d565-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 20 03:08:28.176366 kubelet[2747]: I0120 03:08:28.176192 2747 scope.go:117] "RemoveContainer" containerID="3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337" Jan 20 03:08:28.191560 systemd[1]: Removed slice kubepods-besteffort-podfaa7e1e9_50d2_4e3f_9cf5_8a1a7da82c2d.slice - libcontainer container kubepods-besteffort-podfaa7e1e9_50d2_4e3f_9cf5_8a1a7da82c2d.slice. Jan 20 03:08:28.218396 containerd[1574]: time="2026-01-20T03:08:28.217221256Z" level=info msg="RemoveContainer for \"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\"" Jan 20 03:08:28.218567 systemd[1]: Removed slice kubepods-burstable-pod4d8b902c_7287_4a58_86c3_239fdf52d565.slice - libcontainer container kubepods-burstable-pod4d8b902c_7287_4a58_86c3_239fdf52d565.slice. Jan 20 03:08:28.220391 systemd[1]: kubepods-burstable-pod4d8b902c_7287_4a58_86c3_239fdf52d565.slice: Consumed 8.633s CPU time, 126M memory peak, 212K read from disk, 13.3M written to disk. 
Jan 20 03:08:28.272361 containerd[1574]: time="2026-01-20T03:08:28.271323846Z" level=info msg="RemoveContainer for \"3efd504bf564de1d7533913993c3e6b4f3556140241917af29cfb085e8b0c337\" returns successfully" Jan 20 03:08:28.273485 kubelet[2747]: I0120 03:08:28.273363 2747 scope.go:117] "RemoveContainer" containerID="4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6" Jan 20 03:08:28.279120 containerd[1574]: time="2026-01-20T03:08:28.277563399Z" level=info msg="RemoveContainer for \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\"" Jan 20 03:08:28.290057 containerd[1574]: time="2026-01-20T03:08:28.289448477Z" level=info msg="RemoveContainer for \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\" returns successfully" Jan 20 03:08:28.291416 kubelet[2747]: I0120 03:08:28.291331 2747 scope.go:117] "RemoveContainer" containerID="0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b" Jan 20 03:08:28.295192 containerd[1574]: time="2026-01-20T03:08:28.295077032Z" level=info msg="RemoveContainer for \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\"" Jan 20 03:08:28.303573 containerd[1574]: time="2026-01-20T03:08:28.303375392Z" level=info msg="RemoveContainer for \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\" returns successfully" Jan 20 03:08:28.304397 kubelet[2747]: I0120 03:08:28.304110 2747 scope.go:117] "RemoveContainer" containerID="3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b" Jan 20 03:08:28.308891 containerd[1574]: time="2026-01-20T03:08:28.308499595Z" level=info msg="RemoveContainer for \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\"" Jan 20 03:08:28.316575 containerd[1574]: time="2026-01-20T03:08:28.316298825Z" level=info msg="RemoveContainer for \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\" returns successfully" Jan 20 03:08:28.317423 kubelet[2747]: I0120 03:08:28.317134 2747 scope.go:117] "RemoveContainer" containerID="95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849" Jan 20 03:08:28.321063 containerd[1574]: time="2026-01-20T03:08:28.320953267Z" level=info msg="RemoveContainer for \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\"" Jan 20 03:08:28.380697 containerd[1574]: time="2026-01-20T03:08:28.380441427Z" level=info msg="RemoveContainer for \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\" returns successfully" Jan 20 03:08:28.382135 kubelet[2747]: I0120 03:08:28.381820 2747 scope.go:117] "RemoveContainer" containerID="1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49" Jan 20 03:08:28.386718 containerd[1574]: time="2026-01-20T03:08:28.386108011Z" level=info msg="RemoveContainer for \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\"" Jan 20 03:08:28.396153 containerd[1574]: time="2026-01-20T03:08:28.395941365Z" level=info msg="RemoveContainer for \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\" returns successfully" Jan 20 03:08:28.396828 kubelet[2747]: I0120 03:08:28.396468 2747 scope.go:117] "RemoveContainer" containerID="4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6" Jan 20 03:08:28.397793 containerd[1574]: time="2026-01-20T03:08:28.397247501Z" level=error msg="ContainerStatus for \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\": not found" Jan 20 03:08:28.397870 kubelet[2747]: E0120 03:08:28.397553 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\": not found" containerID="4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6" Jan 20 03:08:28.397870 kubelet[2747]: I0120 03:08:28.397800 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6"} err="failed to get container status \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f500f7409819c5f66728477981a249093904ddfa3a5427f9bd5ab6f280e0ed6\": not found" Jan 20 03:08:28.397870 kubelet[2747]: I0120 03:08:28.397854 2747 scope.go:117] "RemoveContainer" containerID="0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b" Jan 20 03:08:28.398514 containerd[1574]: time="2026-01-20T03:08:28.398449755Z" level=error msg="ContainerStatus for \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\": not found" Jan 20 03:08:28.399205 kubelet[2747]: E0120 03:08:28.399086 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\": not found" containerID="0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b" Jan 20 03:08:28.399205 kubelet[2747]: I0120 03:08:28.399193 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b"} err="failed to get container status \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0be2b6663e87a4fc55ea2b2286570744f798b4ab522da86174a8cc1b6bd9c86b\": not found" Jan 20 03:08:28.399313 kubelet[2747]: I0120 03:08:28.399218 2747 scope.go:117] "RemoveContainer" containerID="3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b" Jan 20 03:08:28.400269 containerd[1574]: time="2026-01-20T03:08:28.399875745Z" level=error msg="ContainerStatus for \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\": not found" Jan 20 03:08:28.401155 kubelet[2747]: E0120 03:08:28.400224 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\": not found" containerID="3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b" Jan 20 03:08:28.401155 kubelet[2747]: I0120 03:08:28.400256 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b"} err="failed to get container status 
\"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bd0c3aa69a9d8454e9a68b0a8e05862a10a615b4590f723288483b0f2f7a61b\": not found" Jan 20 03:08:28.401155 kubelet[2747]: I0120 03:08:28.400282 2747 scope.go:117] "RemoveContainer" containerID="95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849" Jan 20 03:08:28.401155 kubelet[2747]: E0120 03:08:28.400883 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\": not found" containerID="95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849" Jan 20 03:08:28.401155 kubelet[2747]: I0120 03:08:28.400908 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849"} err="failed to get container status \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\": rpc error: code = NotFound desc = an error occurred when try to find container \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\": not found" Jan 20 03:08:28.401155 kubelet[2747]: I0120 03:08:28.400927 2747 scope.go:117] "RemoveContainer" containerID="1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49" Jan 20 03:08:28.401354 containerd[1574]: time="2026-01-20T03:08:28.400751533Z" level=error msg="ContainerStatus for \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95190b7fd1f063b897f2dcb688ec905ec08fb11137783be03816adfe31093849\": not found" Jan 20 03:08:28.401354 containerd[1574]: time="2026-01-20T03:08:28.401340126Z" level=error msg="ContainerStatus for \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\": not found" Jan 20 03:08:28.401750 kubelet[2747]: E0120 03:08:28.401431 2747 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\": not found" containerID="1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49" Jan 20 03:08:28.401750 kubelet[2747]: I0120 03:08:28.401450 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49"} err="failed to get container status \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e672b2d4dbc36623d32d3c54f77661efe36c8d9c22d0c67d577a6bc5342fa49\": not found" Jan 20 03:08:28.602421 sshd[4414]: Connection closed by 10.0.0.1 port 40742 Jan 20 03:08:28.602177 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:28.622499 systemd[1]: sshd@27-10.0.0.5:22-10.0.0.1:40742.service: Deactivated successfully. Jan 20 03:08:28.627226 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 03:08:28.628234 systemd[1]: session-28.scope: Consumed 1.020s CPU time, 25.4M memory peak. Jan 20 03:08:28.631891 systemd-logind[1551]: Session 28 logged out. 
Waiting for processes to exit. Jan 20 03:08:28.637370 systemd[1]: Started sshd@28-10.0.0.5:22-10.0.0.1:40762.service - OpenSSH per-connection server daemon (10.0.0.1:40762). Jan 20 03:08:28.640373 systemd-logind[1551]: Removed session 28. Jan 20 03:08:28.675819 kubelet[2747]: I0120 03:08:28.675340 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d8b902c-7287-4a58-86c3-239fdf52d565" path="/var/lib/kubelet/pods/4d8b902c-7287-4a58-86c3-239fdf52d565/volumes" Jan 20 03:08:28.676977 kubelet[2747]: I0120 03:08:28.676570 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d" path="/var/lib/kubelet/pods/faa7e1e9-50d2-4e3f-9cf5-8a1a7da82c2d/volumes" Jan 20 03:08:28.756565 sshd[4561]: Accepted publickey for core from 10.0.0.1 port 40762 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:28.759487 sshd-session[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:28.776178 systemd-logind[1551]: New session 29 of user core. Jan 20 03:08:28.795906 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 20 03:08:29.904532 sshd[4564]: Connection closed by 10.0.0.1 port 40762 Jan 20 03:08:29.910120 sshd-session[4561]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:29.931945 systemd[1]: sshd@28-10.0.0.5:22-10.0.0.1:40762.service: Deactivated successfully. Jan 20 03:08:29.937452 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 03:08:29.948424 systemd-logind[1551]: Session 29 logged out. Waiting for processes to exit. Jan 20 03:08:29.962120 systemd[1]: Started sshd@29-10.0.0.5:22-10.0.0.1:40818.service - OpenSSH per-connection server daemon (10.0.0.1:40818). Jan 20 03:08:29.970487 systemd-logind[1551]: Removed session 29. Jan 20 03:08:30.041192 systemd[1]: Created slice kubepods-burstable-pod381c5953_7fa4_4136_91a9_2274f50a635e.slice - libcontainer container kubepods-burstable-pod381c5953_7fa4_4136_91a9_2274f50a635e.slice. 
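Created slice kubepods-burstable-pod381c5953... above is the kubelet's systemd cgroup driver provisioning the replacement pod's cgroup before any container starts; it is the counterpart of the earlier Removed slice ... Consumed 8.633s CPU time entry, which reported final resource accounting for the old pod. Inspecting a pod slice on the node (unit name copied from this log):

  systemctl status kubepods-burstable-pod381c5953_7fa4_4136_91a9_2274f50a635e.slice
  systemd-cgls --unit kubepods-burstable-pod381c5953_7fa4_4136_91a9_2274f50a635e.slice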
Jan 20 03:08:30.089538 kubelet[2747]: I0120 03:08:30.089491 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-hostproc\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094194 kubelet[2747]: I0120 03:08:30.093755 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/381c5953-7fa4-4136-91a9-2274f50a635e-cilium-config-path\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094194 kubelet[2747]: I0120 03:08:30.094077 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-host-proc-sys-kernel\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094194 kubelet[2747]: I0120 03:08:30.094113 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-xtables-lock\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094194 kubelet[2747]: I0120 03:08:30.094134 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/381c5953-7fa4-4136-91a9-2274f50a635e-clustermesh-secrets\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094194 kubelet[2747]: I0120 03:08:30.094156 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-cilium-cgroup\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094194 kubelet[2747]: I0120 03:08:30.094188 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-cni-path\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094755 kubelet[2747]: I0120 03:08:30.094210 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-lib-modules\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094755 kubelet[2747]: I0120 03:08:30.094233 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/381c5953-7fa4-4136-91a9-2274f50a635e-cilium-ipsec-secrets\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094755 kubelet[2747]: I0120 03:08:30.094254 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-host-proc-sys-net\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094755 kubelet[2747]: I0120 03:08:30.094278 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-bpf-maps\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094755 kubelet[2747]: I0120 03:08:30.094300 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8q72\" (UniqueName: \"kubernetes.io/projected/381c5953-7fa4-4136-91a9-2274f50a635e-kube-api-access-n8q72\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094755 kubelet[2747]: I0120 03:08:30.094319 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-cilium-run\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094956 kubelet[2747]: I0120 03:08:30.094345 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/381c5953-7fa4-4136-91a9-2274f50a635e-etc-cni-netd\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.094956 kubelet[2747]: I0120 03:08:30.094368 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/381c5953-7fa4-4136-91a9-2274f50a635e-hubble-tls\") pod \"cilium-5rxzw\" (UID: \"381c5953-7fa4-4136-91a9-2274f50a635e\") " pod="kube-system/cilium-5rxzw" Jan 20 03:08:30.130977 sshd[4576]: Accepted publickey for core from 10.0.0.1 port 40818 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ Jan 20 03:08:30.135301 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:08:30.150841 systemd-logind[1551]: New session 30 of user core. Jan 20 03:08:30.164248 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 03:08:30.254871 sshd[4579]: Connection closed by 10.0.0.1 port 40818 Jan 20 03:08:30.256907 sshd-session[4576]: pam_unix(sshd:session): session closed for user core Jan 20 03:08:30.270739 systemd[1]: sshd@29-10.0.0.5:22-10.0.0.1:40818.service: Deactivated successfully. Jan 20 03:08:30.279084 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 03:08:30.283730 systemd-logind[1551]: Session 30 logged out. Waiting for processes to exit. Jan 20 03:08:30.292400 systemd[1]: Started sshd@30-10.0.0.5:22-10.0.0.1:40838.service - OpenSSH per-connection server daemon (10.0.0.1:40838). Jan 20 03:08:30.295284 systemd-logind[1551]: Removed session 30. 
Jan 20 03:08:30.366072 kubelet[2747]: E0120 03:08:30.365530 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:30.368546 containerd[1574]: time="2026-01-20T03:08:30.368063972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rxzw,Uid:381c5953-7fa4-4136-91a9-2274f50a635e,Namespace:kube-system,Attempt:0,}"
Jan 20 03:08:30.447839 sshd[4590]: Accepted publickey for core from 10.0.0.1 port 40838 ssh2: RSA SHA256:rs3S8coWSJFTYHYfRmGEv2RPj1qmKyKdcrDKOFxFSdQ
Jan 20 03:08:30.469091 sshd-session[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:08:30.485472 containerd[1574]: time="2026-01-20T03:08:30.485367087Z" level=info msg="connecting to shim 4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809" address="unix:///run/containerd/s/7b0e982f3847d602978ae7686b9fa0455a6a8c28f2eed3c91ff5e8136795658f" namespace=k8s.io protocol=ttrpc version=3
Jan 20 03:08:30.502160 systemd-logind[1551]: New session 31 of user core.
Jan 20 03:08:30.511950 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 20 03:08:30.583563 systemd[1]: Started cri-containerd-4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809.scope - libcontainer container 4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809.
Jan 20 03:08:30.715364 containerd[1574]: time="2026-01-20T03:08:30.714764375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rxzw,Uid:381c5953-7fa4-4136-91a9-2274f50a635e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\""
Jan 20 03:08:30.717488 kubelet[2747]: E0120 03:08:30.716871 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:30.760511 containerd[1574]: time="2026-01-20T03:08:30.749414557Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 20 03:08:30.805074 kubelet[2747]: E0120 03:08:30.804908 2747 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 20 03:08:30.810753 containerd[1574]: time="2026-01-20T03:08:30.810400448Z" level=info msg="Container 96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98: CDI devices from CRI Config.CDIDevices: []"
Jan 20 03:08:30.836067 containerd[1574]: time="2026-01-20T03:08:30.835862995Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98\""
Jan 20 03:08:30.837403 containerd[1574]: time="2026-01-20T03:08:30.837375433Z" level=info msg="StartContainer for \"96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98\""
Jan 20 03:08:30.838954 containerd[1574]: time="2026-01-20T03:08:30.838925685Z" level=info msg="connecting to shim 96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98" address="unix:///run/containerd/s/7b0e982f3847d602978ae7686b9fa0455a6a8c28f2eed3c91ff5e8136795658f" protocol=ttrpc version=3
Jan 20 03:08:30.899419 systemd[1]: Started cri-containerd-96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98.scope - libcontainer container 96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98.
Jan 20 03:08:31.010942 containerd[1574]: time="2026-01-20T03:08:31.010554860Z" level=info msg="StartContainer for \"96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98\" returns successfully"
Jan 20 03:08:31.032445 systemd[1]: cri-containerd-96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98.scope: Deactivated successfully.
Jan 20 03:08:31.040501 containerd[1574]: time="2026-01-20T03:08:31.040453002Z" level=info msg="received container exit event container_id:\"96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98\" id:\"96d68bf660f6a6be437eb8895b83381a3064c087be6c4c33e50ec20cc4137b98\" pid:4660 exited_at:{seconds:1768878511 nanos:39772013}"
Jan 20 03:08:31.245180 kubelet[2747]: E0120 03:08:31.245114 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:31.261246 containerd[1574]: time="2026-01-20T03:08:31.260928055Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 20 03:08:31.289289 containerd[1574]: time="2026-01-20T03:08:31.287762060Z" level=info msg="Container 30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef: CDI devices from CRI Config.CDIDevices: []"
Jan 20 03:08:31.292419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount582259807.mount: Deactivated successfully.
Jan 20 03:08:31.305812 containerd[1574]: time="2026-01-20T03:08:31.305413521Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef\""
Jan 20 03:08:31.308405 containerd[1574]: time="2026-01-20T03:08:31.307871766Z" level=info msg="StartContainer for \"30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef\""
Jan 20 03:08:31.309359 containerd[1574]: time="2026-01-20T03:08:31.309250679Z" level=info msg="connecting to shim 30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef" address="unix:///run/containerd/s/7b0e982f3847d602978ae7686b9fa0455a6a8c28f2eed3c91ff5e8136795658f" protocol=ttrpc version=3
Jan 20 03:08:31.364086 systemd[1]: Started cri-containerd-30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef.scope - libcontainer container 30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef.
Jan 20 03:08:31.467355 containerd[1574]: time="2026-01-20T03:08:31.466541193Z" level=info msg="StartContainer for \"30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef\" returns successfully"
Jan 20 03:08:31.493429 systemd[1]: cri-containerd-30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef.scope: Deactivated successfully.
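[annotation] The sequence above is the CRI flow for the pod's first init container: RunPodSandbox creates sandbox 4bb08d74…, CreateContainer registers mount-cgroup inside it, StartContainer launches it over a ttrpc connection to the per-sandbox shim socket, systemd tracks it as a cri-containerd-*.scope, and the scope is deactivated as soon as the short-lived container exits. A minimal sketch, assuming the v1 containerd Go client and the default socket path, of inspecting the same containers from Go; as the namespace=k8s.io field above shows, CRI-managed containers live in the k8s.io namespace:

```go
// Hypothetical sketch: list CRI-managed containers via the containerd API.
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket path (assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI containers and sandboxes live under namespace=k8s.io.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		// IDs like 4bb08d74... (sandbox) and 96d68bf6... (mount-cgroup).
		fmt.Println(c.ID())
	}
}
```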
Jan 20 03:08:31.508546 containerd[1574]: time="2026-01-20T03:08:31.505473527Z" level=info msg="received container exit event container_id:\"30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef\" id:\"30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef\" pid:4704 exited_at:{seconds:1768878511 nanos:499490200}"
Jan 20 03:08:31.630493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30bd22403a1eeed66b8259646b2299eb7d5c83653207b1040932c840e89efaef-rootfs.mount: Deactivated successfully.
Jan 20 03:08:32.263408 kubelet[2747]: E0120 03:08:32.261750 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:32.307860 containerd[1574]: time="2026-01-20T03:08:32.302926860Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 20 03:08:32.384782 containerd[1574]: time="2026-01-20T03:08:32.384407395Z" level=info msg="Container 593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d: CDI devices from CRI Config.CDIDevices: []"
Jan 20 03:08:32.435918 containerd[1574]: time="2026-01-20T03:08:32.435538842Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d\""
Jan 20 03:08:32.441761 containerd[1574]: time="2026-01-20T03:08:32.441448503Z" level=info msg="StartContainer for \"593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d\""
Jan 20 03:08:32.446757 containerd[1574]: time="2026-01-20T03:08:32.444533237Z" level=info msg="connecting to shim 593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d" address="unix:///run/containerd/s/7b0e982f3847d602978ae7686b9fa0455a6a8c28f2eed3c91ff5e8136795658f" protocol=ttrpc version=3
Jan 20 03:08:32.521498 systemd[1]: Started cri-containerd-593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d.scope - libcontainer container 593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d.
Jan 20 03:08:32.669556 kubelet[2747]: E0120 03:08:32.669504 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wgc9p" podUID="cc6cc0db-b540-4d9b-8151-82f48d01934a"
Jan 20 03:08:32.736656 containerd[1574]: time="2026-01-20T03:08:32.736300720Z" level=info msg="StartContainer for \"593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d\" returns successfully"
Jan 20 03:08:32.736847 systemd[1]: cri-containerd-593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d.scope: Deactivated successfully.
Jan 20 03:08:32.741754 containerd[1574]: time="2026-01-20T03:08:32.741505471Z" level=info msg="received container exit event container_id:\"593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d\" id:\"593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d\" pid:4748 exited_at:{seconds:1768878512 nanos:741208269}"
Jan 20 03:08:32.861166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-593cab3cf4de3d460ae3a7f78c7c5e4bfa6f439699823ca6f12bcda71a25430d-rootfs.mount: Deactivated successfully.
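[annotation] The mount-bpf-fs init container that just ran and exited is the Cilium step that makes sure the BPF filesystem is mounted at /sys/fs/bpf, so pinned BPF maps survive agent restarts. A minimal sketch of the equivalent mount call from Go, assuming golang.org/x/sys/unix and root privileges; this is an illustration of the mechanism, not Cilium's actual code:

```go
// Minimal sketch: mount bpffs at /sys/fs/bpf, equivalent to
// `mount -t bpf bpf /sys/fs/bpf`. Must run as root.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	if err := unix.Mount("bpf", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		// EBUSY typically means bpffs is already mounted there.
		fmt.Println("mount failed:", err)
		return
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf")
}
```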
Jan 20 03:08:33.297912 kubelet[2747]: I0120 03:08:33.293561 2747 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T03:08:33Z","lastTransitionTime":"2026-01-20T03:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 20 03:08:33.299896 kubelet[2747]: E0120 03:08:33.299717 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:33.324189 containerd[1574]: time="2026-01-20T03:08:33.323915776Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 20 03:08:33.388939 containerd[1574]: time="2026-01-20T03:08:33.388870502Z" level=info msg="Container e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84: CDI devices from CRI Config.CDIDevices: []"
Jan 20 03:08:33.392465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518296650.mount: Deactivated successfully.
Jan 20 03:08:33.419723 containerd[1574]: time="2026-01-20T03:08:33.419235189Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84\""
Jan 20 03:08:33.424109 containerd[1574]: time="2026-01-20T03:08:33.423362610Z" level=info msg="StartContainer for \"e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84\""
Jan 20 03:08:33.425711 containerd[1574]: time="2026-01-20T03:08:33.425147308Z" level=info msg="connecting to shim e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84" address="unix:///run/containerd/s/7b0e982f3847d602978ae7686b9fa0455a6a8c28f2eed3c91ff5e8136795658f" protocol=ttrpc version=3
Jan 20 03:08:33.503384 systemd[1]: Started cri-containerd-e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84.scope - libcontainer container e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84.
Jan 20 03:08:33.693151 systemd[1]: cri-containerd-e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84.scope: Deactivated successfully.
Jan 20 03:08:33.698264 containerd[1574]: time="2026-01-20T03:08:33.697840797Z" level=info msg="received container exit event container_id:\"e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84\" id:\"e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84\" pid:4788 exited_at:{seconds:1768878513 nanos:693288692}"
Jan 20 03:08:33.705922 containerd[1574]: time="2026-01-20T03:08:33.705364273Z" level=info msg="StartContainer for \"e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84\" returns successfully"
Jan 20 03:08:33.817332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e658fed7ea06101f9f9708683e3f57f2182d20a9570b50c7cd5277ff2ffaca84-rootfs.mount: Deactivated successfully.
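[annotation] The setters.go:618 entry above marks the node NotReady because no CNI plugin is initialized yet; the condition clears once the Cilium agent comes up and writes a CNI config for the runtime to find. A hypothetical node-side check, assuming containerd's default CNI config directory /etc/cni/net.d; the example config file name is an assumption:

```go
// Hypothetical check: the NetworkPluginNotReady condition above persists
// until a CNI config file appears in the runtime's config directory.
package main

import (
	"fmt"
	"os"
)

func main() {
	// /etc/cni/net.d is containerd's default CNI conf dir (assumed here).
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil || len(entries) == 0 {
		fmt.Println("no CNI config yet: kubelet will keep reporting NetworkPluginNotReady")
		return
	}
	for _, e := range entries {
		// e.g. 05-cilium.conflist once the agent is ready (name assumed).
		fmt.Println(e.Name())
	}
}
```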
Jan 20 03:08:34.324314 kubelet[2747]: E0120 03:08:34.324132 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:34.350724 containerd[1574]: time="2026-01-20T03:08:34.350372937Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 20 03:08:34.403051 containerd[1574]: time="2026-01-20T03:08:34.401925938Z" level=info msg="Container 38b77b113dfb59b5cf72ece482e3ecbcad08c57b65961c4ddd70ed3151608431: CDI devices from CRI Config.CDIDevices: []"
Jan 20 03:08:34.406913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121714501.mount: Deactivated successfully.
Jan 20 03:08:34.431455 containerd[1574]: time="2026-01-20T03:08:34.431224459Z" level=info msg="CreateContainer within sandbox \"4bb08d749ae59a5c1ac3bc092fbd5e08cd038cafde2d2589c5ba9d47cab4a809\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"38b77b113dfb59b5cf72ece482e3ecbcad08c57b65961c4ddd70ed3151608431\""
Jan 20 03:08:34.433833 containerd[1574]: time="2026-01-20T03:08:34.433493884Z" level=info msg="StartContainer for \"38b77b113dfb59b5cf72ece482e3ecbcad08c57b65961c4ddd70ed3151608431\""
Jan 20 03:08:34.439714 containerd[1574]: time="2026-01-20T03:08:34.438383539Z" level=info msg="connecting to shim 38b77b113dfb59b5cf72ece482e3ecbcad08c57b65961c4ddd70ed3151608431" address="unix:///run/containerd/s/7b0e982f3847d602978ae7686b9fa0455a6a8c28f2eed3c91ff5e8136795658f" protocol=ttrpc version=3
Jan 20 03:08:34.498344 systemd[1]: Started cri-containerd-38b77b113dfb59b5cf72ece482e3ecbcad08c57b65961c4ddd70ed3151608431.scope - libcontainer container 38b77b113dfb59b5cf72ece482e3ecbcad08c57b65961c4ddd70ed3151608431.
Jan 20 03:08:34.671976 kubelet[2747]: E0120 03:08:34.670491 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wgc9p" podUID="cc6cc0db-b540-4d9b-8151-82f48d01934a"
Jan 20 03:08:34.699790 containerd[1574]: time="2026-01-20T03:08:34.699125920Z" level=info msg="StartContainer for \"38b77b113dfb59b5cf72ece482e3ecbcad08c57b65961c4ddd70ed3151608431\" returns successfully"
Jan 20 03:08:35.342432 kubelet[2747]: E0120 03:08:35.341833 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:35.389574 kubelet[2747]: I0120 03:08:35.389400 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5rxzw" podStartSLOduration=6.389380733 podStartE2EDuration="6.389380733s" podCreationTimestamp="2026-01-20 03:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:08:35.388473767 +0000 UTC m=+104.885818022" watchObservedRunningTime="2026-01-20 03:08:35.389380733 +0000 UTC m=+104.886724979"
Jan 20 03:08:35.662764 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 20 03:08:36.373774 kubelet[2747]: E0120 03:08:36.373261 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:36.670894 kubelet[2747]: E0120 03:08:36.670851 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:41.594466 systemd-networkd[1465]: lxc_health: Link UP
Jan 20 03:08:41.610892 systemd-networkd[1465]: lxc_health: Gained carrier
Jan 20 03:08:42.378222 kubelet[2747]: E0120 03:08:42.374533 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:42.405261 kubelet[2747]: E0120 03:08:42.405155 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 03:08:43.252439 systemd-networkd[1465]: lxc_health: Gained IPv6LL
Jan 20 03:08:47.023981 sshd[4614]: Connection closed by 10.0.0.1 port 40838
Jan 20 03:08:47.025523 sshd-session[4590]: pam_unix(sshd:session): session closed for user core
Jan 20 03:08:47.032987 systemd[1]: sshd@30-10.0.0.5:22-10.0.0.1:40838.service: Deactivated successfully.
Jan 20 03:08:47.038441 systemd[1]: session-31.scope: Deactivated successfully.
Jan 20 03:08:47.042411 systemd-logind[1551]: Session 31 logged out. Waiting for processes to exit.
Jan 20 03:08:47.045210 systemd-logind[1551]: Removed session 31.
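[annotation] The dns.go:153 errors repeated throughout this section are kubelet warning that the host lists more nameservers than the glibc resolver limit of three, so it truncates the list it propagates to pods (here to 1.1.1.1, 1.0.0.1 and 8.8.8.8). A minimal sketch of reproducing that check against the standard /etc/resolv.conf:

```go
// Minimal sketch: count nameserver lines in /etc/resolv.conf and flag the
// condition behind kubelet's dns.go:153 "Nameserver limits exceeded" errors.
// The limit of 3 matches the glibc resolver's MAXNS.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	if len(servers) > 3 {
		fmt.Printf("%d nameservers; kubelet applies only the first 3: %v\n",
			len(servers), servers[:3])
	}
}
```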