Jan 24 00:52:39.061909 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:52:39.061929 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:52:39.061940 kernel: BIOS-provided physical RAM map:
Jan 24 00:52:39.061946 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 00:52:39.061951 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 00:52:39.061956 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 00:52:39.061963 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 24 00:52:39.061968 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 24 00:52:39.061973 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 00:52:39.061981 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 00:52:39.061987 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:52:39.061992 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 00:52:39.061997 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:52:39.062003 kernel: NX (Execute Disable) protection: active
Jan 24 00:52:39.062009 kernel: APIC: Static calls initialized
Jan 24 00:52:39.062017 kernel: SMBIOS 2.8 present.
Jan 24 00:52:39.062023 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 24 00:52:39.062029 kernel: Hypervisor detected: KVM
Jan 24 00:52:39.062034 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:52:39.062040 kernel: kvm-clock: using sched offset of 5903447910 cycles
Jan 24 00:52:39.062046 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:52:39.062052 kernel: tsc: Detected 2445.426 MHz processor
Jan 24 00:52:39.062058 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:52:39.062064 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:52:39.062073 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 24 00:52:39.062079 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 00:52:39.062085 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:52:39.062090 kernel: Using GB pages for direct mapping
Jan 24 00:52:39.062096 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:52:39.062102 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 24 00:52:39.062108 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:52:39.062114 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:52:39.062120 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:52:39.062128 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 24 00:52:39.062134 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:52:39.062140 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:52:39.062146 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:52:39.062151 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:52:39.062157 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 24 00:52:39.062163 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 24 00:52:39.062173 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 24 00:52:39.062181 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 24 00:52:39.062187 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 24 00:52:39.062194 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 24 00:52:39.062200 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 24 00:52:39.062206 kernel: No NUMA configuration found
Jan 24 00:52:39.062212 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 24 00:52:39.062221 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 24 00:52:39.062231 kernel: Zone ranges:
Jan 24 00:52:39.062237 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:52:39.062243 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 24 00:52:39.062249 kernel: Normal empty
Jan 24 00:52:39.062255 kernel: Movable zone start for each node
Jan 24 00:52:39.062261 kernel: Early memory node ranges
Jan 24 00:52:39.062267 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 00:52:39.062273 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 24 00:52:39.062279 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 24 00:52:39.062288 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:52:39.062294 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 00:52:39.062300 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 24 00:52:39.062306 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:52:39.062312 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:52:39.062318 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:52:39.062324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:52:39.062330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:52:39.062337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:52:39.062345 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:52:39.062351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:52:39.062357 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:52:39.062363 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:52:39.062369 kernel: TSC deadline timer available
Jan 24 00:52:39.062375 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 24 00:52:39.062381 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:52:39.062387 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:52:39.062393 kernel: kvm-guest: setup PV sched yield
Jan 24 00:52:39.062402 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 00:52:39.062408 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:52:39.062414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:52:39.062420 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 24 00:52:39.062426 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 24 00:52:39.062432 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 24 00:52:39.062438 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 24 00:52:39.062444 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:52:39.062451 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:52:39.062460 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:52:39.062466 kernel: random: crng init done
Jan 24 00:52:39.062472 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:52:39.062478 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:52:39.062484 kernel: Fallback order for Node 0: 0
Jan 24 00:52:39.062490 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 24 00:52:39.062497 kernel: Policy zone: DMA32
Jan 24 00:52:39.062550 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:52:39.062557 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved)
Jan 24 00:52:39.062567 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 00:52:39.062573 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:52:39.062579 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:52:39.062585 kernel: Dynamic Preempt: voluntary
Jan 24 00:52:39.062591 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:52:39.062598 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:52:39.062604 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 00:52:39.062611 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:52:39.062617 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:52:39.062626 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:52:39.062635 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:52:39.062647 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 00:52:39.062657 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 00:52:39.062668 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:52:39.062679 kernel: Console: colour VGA+ 80x25
Jan 24 00:52:39.062690 kernel: printk: console [ttyS0] enabled
Jan 24 00:52:39.062701 kernel: ACPI: Core revision 20230628
Jan 24 00:52:39.062709 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:52:39.062719 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:52:39.062725 kernel: x2apic enabled
Jan 24 00:52:39.062731 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:52:39.062737 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:52:39.062744 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:52:39.062750 kernel: kvm-guest: setup PV IPIs
Jan 24 00:52:39.062756 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:52:39.062879 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:52:39.062893 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:52:39.062900 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:52:39.062906 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:52:39.062913 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:52:39.062943 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:52:39.062950 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:52:39.062957 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:52:39.062963 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:52:39.062973 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:52:39.062980 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:52:39.062986 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:52:39.062993 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:52:39.063027 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:52:39.063035 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:52:39.063041 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:52:39.063063 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:52:39.063069 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:52:39.063079 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:52:39.063101 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 00:52:39.063107 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:52:39.063114 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:52:39.063134 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:52:39.063141 kernel: landlock: Up and running.
Jan 24 00:52:39.063148 kernel: SELinux: Initializing.
Jan 24 00:52:39.063154 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:52:39.063161 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:52:39.063185 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:52:39.063192 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:52:39.063213 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:52:39.063220 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:52:39.063226 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 00:52:39.063247 kernel: signal: max sigframe size: 1776
Jan 24 00:52:39.063254 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:52:39.063291 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:52:39.063329 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:52:39.063361 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:52:39.063368 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:52:39.063375 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:52:39.063387 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 00:52:39.063399 kernel: smpboot: Max logical packages: 1
Jan 24 00:52:39.063411 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 00:52:39.063421 kernel: devtmpfs: initialized
Jan 24 00:52:39.063427 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:52:39.063434 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:52:39.063444 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 00:52:39.063450 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:52:39.063457 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:52:39.063463 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:52:39.063470 kernel: audit: type=2000 audit(1769215957.231:1): state=initialized audit_enabled=0 res=1
Jan 24 00:52:39.063477 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:52:39.063483 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:52:39.063489 kernel: cpuidle: using governor menu
Jan 24 00:52:39.063496 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:52:39.063556 kernel: dca service started, version 1.12.1
Jan 24 00:52:39.063563 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 00:52:39.063570 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 00:52:39.063576 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:52:39.063583 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:52:39.063589 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:52:39.063596 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:52:39.063602 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:52:39.063611 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:52:39.063618 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:52:39.063624 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:52:39.063631 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:52:39.063637 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:52:39.063643 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:52:39.063650 kernel: ACPI: Interpreter enabled
Jan 24 00:52:39.063656 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:52:39.063663 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:52:39.063669 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:52:39.063678 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:52:39.063684 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:52:39.063691 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:52:39.063920 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:52:39.064056 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:52:39.064188 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:52:39.064205 kernel: PCI host bridge to bus 0000:00
Jan 24 00:52:39.064353 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:52:39.064468 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:52:39.064656 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:52:39.064770 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 24 00:52:39.064967 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 00:52:39.065082 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 24 00:52:39.065191 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:52:39.065358 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:52:39.065494 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 24 00:52:39.065674 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 24 00:52:39.065845 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 24 00:52:39.065971 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 24 00:52:39.066089 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:52:39.066227 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 24 00:52:39.066348 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 24 00:52:39.066467 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 24 00:52:39.066642 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 24 00:52:39.066984 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 24 00:52:39.067153 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 24 00:52:39.067278 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 24 00:52:39.067404 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 24 00:52:39.067591 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 24 00:52:39.067740 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 24 00:52:39.067911 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 24 00:52:39.068034 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 24 00:52:39.068172 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 24 00:52:39.068334 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:52:39.068487 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:52:39.068682 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:52:39.068856 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 24 00:52:39.068980 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 24 00:52:39.069107 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:52:39.069263 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 00:52:39.069287 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:52:39.069294 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:52:39.069303 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:52:39.069315 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:52:39.069324 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:52:39.069331 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:52:39.069337 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:52:39.069344 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:52:39.069352 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:52:39.069368 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:52:39.069380 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:52:39.069392 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:52:39.069399 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:52:39.069406 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:52:39.069412 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:52:39.069418 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:52:39.069425 kernel: iommu: Default domain type: Translated
Jan 24 00:52:39.069432 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:52:39.069441 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:52:39.069447 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:52:39.069454 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 00:52:39.069460 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 24 00:52:39.069648 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:52:39.069870 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:52:39.069997 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:52:39.070007 kernel: vgaarb: loaded
Jan 24 00:52:39.070013 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:52:39.070025 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:52:39.070031 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:52:39.070038 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:52:39.070045 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:52:39.070051 kernel: pnp: PnP ACPI init
Jan 24 00:52:39.070180 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 00:52:39.070190 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 00:52:39.070197 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:52:39.070207 kernel: NET: Registered PF_INET protocol family
Jan 24 00:52:39.070214 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:52:39.070220 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:52:39.070227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:52:39.070233 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:52:39.070240 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:52:39.070246 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:52:39.070253 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:52:39.070262 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:52:39.070268 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:52:39.070275 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:52:39.070386 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:52:39.070497 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:52:39.070664 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:52:39.070773 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 24 00:52:39.070943 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:52:39.071054 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 24 00:52:39.071068 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:52:39.071074 kernel: Initialise system trusted keyrings
Jan 24 00:52:39.071081 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:52:39.071087 kernel: Key type asymmetric registered
Jan 24 00:52:39.071093 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:52:39.071100 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:52:39.071106 kernel: io scheduler mq-deadline registered
Jan 24 00:52:39.071113 kernel: io scheduler kyber registered
Jan 24 00:52:39.071119 kernel: io scheduler bfq registered
Jan 24 00:52:39.071128 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:52:39.071136 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:52:39.071142 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:52:39.071149 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 00:52:39.071155 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:52:39.071162 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:52:39.071169 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:52:39.071175 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:52:39.071182 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:52:39.071309 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 00:52:39.071319 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:52:39.071485 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 00:52:39.071770 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:52:38 UTC (1769215958)
Jan 24 00:52:39.072137 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:52:39.072180 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:52:39.072187 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:52:39.072194 kernel: Segment Routing with IPv6
Jan 24 00:52:39.072223 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:52:39.072233 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:52:39.072255 kernel: Key type dns_resolver registered
Jan 24 00:52:39.072262 kernel: IPI shorthand broadcast: enabled
Jan 24 00:52:39.072269 kernel: sched_clock: Marking stable (1089021108, 476297795)->(1984782537, -419463634)
Jan 24 00:52:39.072290 kernel: registered taskstats version 1
Jan 24 00:52:39.072297 kernel: Loading compiled-in X.509 certificates
Jan 24 00:52:39.072304 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:52:39.072310 kernel: Key type .fscrypt registered
Jan 24 00:52:39.072334 kernel: Key type fscrypt-provisioning registered
Jan 24 00:52:39.072341 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:52:39.072347 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:52:39.072354 kernel: ima: No architecture policies found
Jan 24 00:52:39.072360 kernel: clk: Disabling unused clocks
Jan 24 00:52:39.072367 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:52:39.072374 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:52:39.072380 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:52:39.072387 kernel: Run /init as init process
Jan 24 00:52:39.072396 kernel: with arguments:
Jan 24 00:52:39.072402 kernel: /init
Jan 24 00:52:39.072409 kernel: with environment:
Jan 24 00:52:39.072415 kernel: HOME=/
Jan 24 00:52:39.072422 kernel: TERM=linux
Jan 24 00:52:39.072430 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:52:39.072438 systemd[1]: Detected virtualization kvm.
Jan 24 00:52:39.072448 systemd[1]: Detected architecture x86-64.
Jan 24 00:52:39.072454 systemd[1]: Running in initrd.
Jan 24 00:52:39.072461 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:52:39.072467 systemd[1]: Hostname set to <localhost>.
Jan 24 00:52:39.072474 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:52:39.072481 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:52:39.072488 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:52:39.072495 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:52:39.072545 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:52:39.072553 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:52:39.072560 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:52:39.072567 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:52:39.072575 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:52:39.072583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:52:39.072590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:52:39.072599 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:52:39.072606 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:52:39.072632 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:52:39.072640 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:52:39.072661 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:52:39.072689 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:52:39.072697 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:52:39.072707 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:52:39.072714 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:52:39.072721 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:52:39.072728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:52:39.072735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:52:39.072742 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:52:39.072749 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:52:39.072756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:52:39.072766 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:52:39.072773 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:52:39.072780 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:52:39.072833 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:52:39.072842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:52:39.072849 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:52:39.072881 systemd-journald[194]: Collecting audit messages is disabled. Jan 24 00:52:39.072904 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:52:39.072912 systemd-journald[194]: Journal started Jan 24 00:52:39.072930 systemd-journald[194]: Runtime Journal (/run/log/journal/529e2db61f8347ab8ed095f7a663e2c2) is 6.0M, max 48.4M, 42.3M free. Jan 24 00:52:39.072729 systemd-modules-load[195]: Inserted module 'overlay' Jan 24 00:52:39.077762 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:52:39.084931 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:52:39.095438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:52:39.101337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:52:39.287471 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:52:39.287500 kernel: Bridge firewalling registered Jan 24 00:52:39.119906 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 24 00:52:39.294085 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:52:39.298103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:52:39.298749 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:52:39.332245 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:52:39.337494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:52:39.348692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:52:39.355846 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:52:39.362602 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:52:39.369604 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:52:39.375338 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:52:39.385005 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 24 00:52:39.388909 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:52:39.407286 dracut-cmdline[228]: dracut-dracut-053 Jan 24 00:52:39.412066 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:52:39.428946 systemd-resolved[232]: Positive Trust Anchors: Jan 24 00:52:39.428985 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:52:39.429030 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:52:39.432494 systemd-resolved[232]: Defaulting to hostname 'linux'. Jan 24 00:52:39.433972 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:52:39.436418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:52:39.540935 kernel: SCSI subsystem initialized Jan 24 00:52:39.550870 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:52:39.562890 kernel: iscsi: registered transport (tcp) Jan 24 00:52:39.584896 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:52:39.584994 kernel: QLogic iSCSI HBA Driver Jan 24 00:52:39.635924 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:52:39.657044 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:52:39.686365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:52:39.686425 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:52:39.689383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:52:39.734895 kernel: raid6: avx2x4 gen() 29661 MB/s Jan 24 00:52:39.752889 kernel: raid6: avx2x2 gen() 26099 MB/s Jan 24 00:52:39.772556 kernel: raid6: avx2x1 gen() 21832 MB/s Jan 24 00:52:39.772622 kernel: raid6: using algorithm avx2x4 gen() 29661 MB/s Jan 24 00:52:39.793477 kernel: raid6: .... xor() 3998 MB/s, rmw enabled Jan 24 00:52:39.793603 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:52:39.814882 kernel: xor: automatically using best checksumming function avx Jan 24 00:52:39.965876 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:52:39.979399 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:52:39.996030 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:52:40.021363 systemd-udevd[416]: Using default interface naming scheme 'v255'. Jan 24 00:52:40.031626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 24 00:52:40.045027 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:52:40.060110 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Jan 24 00:52:40.093315 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:52:40.114241 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:52:40.190155 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:52:40.202023 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:52:40.218322 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:52:40.225162 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:52:40.232292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:52:40.238932 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:52:40.251040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:52:40.258388 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:52:40.265938 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:52:40.273943 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 24 00:52:40.274133 kernel: libata version 3.00 loaded. Jan 24 00:52:40.270754 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:52:40.270921 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:52:40.292321 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:52:40.292354 kernel: GPT:9289727 != 19775487 Jan 24 00:52:40.292365 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:52:40.292375 kernel: GPT:9289727 != 19775487 Jan 24 00:52:40.292384 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:52:40.292394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:52:40.292300 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:52:40.300825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:52:40.301072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:52:40.310135 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:52:40.319059 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:52:40.319260 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:52:40.325000 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:52:40.326963 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:52:40.328099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:52:40.338958 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:52:40.335559 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 24 00:52:40.345437 kernel: scsi host0: ahci Jan 24 00:52:40.351848 kernel: scsi host1: ahci Jan 24 00:52:40.358632 kernel: scsi host2: ahci Jan 24 00:52:40.359303 kernel: AES CTR mode by8 optimization enabled Jan 24 00:52:40.366890 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (463) Jan 24 00:52:40.366972 kernel: scsi host3: ahci Jan 24 00:52:40.370737 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474) Jan 24 00:52:40.372837 kernel: scsi host4: ahci Jan 24 00:52:40.382847 kernel: scsi host5: ahci Jan 24 00:52:40.383899 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 24 00:52:40.383936 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 24 00:52:40.383956 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 24 00:52:40.383974 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 24 00:52:40.384002 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 24 00:52:40.384019 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 24 00:52:40.386463 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:52:40.543386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:52:40.557482 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 00:52:40.575259 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:52:40.579005 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 00:52:40.592581 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:52:40.610049 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:52:40.615405 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:52:40.628407 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:52:40.628428 disk-uuid[557]: Primary Header is updated. Jan 24 00:52:40.628428 disk-uuid[557]: Secondary Entries is updated. Jan 24 00:52:40.628428 disk-uuid[557]: Secondary Header is updated. Jan 24 00:52:40.638358 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:52:40.644292 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:52:40.705175 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:52:40.705248 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:52:40.705266 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:52:40.705281 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:52:40.709680 kernel: ata3.00: applying bridge limits Jan 24 00:52:40.713882 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:52:40.713918 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:52:40.715949 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:52:40.724871 kernel: ata3.00: configured for UDMA/100 Jan 24 00:52:40.724917 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:52:40.780899 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:52:40.781275 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:52:40.799893 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:52:41.640067 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:52:41.640144 disk-uuid[559]: The operation has completed successfully. Jan 24 00:52:41.677056 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:52:41.677221 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:52:41.704133 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:52:41.711727 sh[595]: Success Jan 24 00:52:41.725916 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:52:41.769265 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:52:41.782732 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:52:41.786906 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:52:41.809491 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:52:41.809588 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:52:41.809610 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:52:41.815161 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:52:41.815196 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:52:41.827566 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:52:41.833175 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:52:41.847078 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:52:41.850683 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:52:41.871393 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:52:41.871442 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:52:41.871462 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:52:41.880879 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:52:41.894603 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:52:41.900999 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:52:41.910253 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 24 00:52:41.924096 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:52:41.987846 ignition[691]: Ignition 2.19.0 Jan 24 00:52:41.987858 ignition[691]: Stage: fetch-offline Jan 24 00:52:41.987913 ignition[691]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:52:41.987925 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:52:41.988012 ignition[691]: parsed url from cmdline: "" Jan 24 00:52:41.988017 ignition[691]: no config URL provided Jan 24 00:52:41.988023 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:52:41.988032 ignition[691]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:52:41.988063 ignition[691]: op(1): [started] loading QEMU firmware config module Jan 24 00:52:41.988069 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 00:52:41.997929 ignition[691]: op(1): [finished] loading QEMU firmware config module Jan 24 00:52:42.071301 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:52:42.091274 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:52:42.116074 systemd-networkd[783]: lo: Link UP Jan 24 00:52:42.116106 systemd-networkd[783]: lo: Gained carrier Jan 24 00:52:42.117750 systemd-networkd[783]: Enumeration completed Jan 24 00:52:42.117990 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:52:42.118706 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:52:42.118710 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:52:42.120066 systemd-networkd[783]: eth0: Link UP Jan 24 00:52:42.120070 systemd-networkd[783]: eth0: Gained carrier Jan 24 00:52:42.120077 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:52:42.126071 systemd[1]: Reached target network.target - Network. Jan 24 00:52:42.168894 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:52:42.229419 ignition[691]: parsing config with SHA512: 3345bfe63435ef79836ff58b5b6acc8c17d053dfb1d7ced3a55a712d42dc009a0d3d17747cce3ece6d6bd39e378e3e995046a10fcd86d68d31dcfdf5bef28653 Jan 24 00:52:42.235746 unknown[691]: fetched base config from "system" Jan 24 00:52:42.236348 ignition[691]: fetch-offline: fetch-offline passed Jan 24 00:52:42.235759 unknown[691]: fetched user config from "qemu" Jan 24 00:52:42.236595 ignition[691]: Ignition finished successfully Jan 24 00:52:42.237019 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.97 Jan 24 00:52:42.237028 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jan 24 00:52:42.240183 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:52:42.245695 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 00:52:42.250971 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 24 00:52:42.271994 ignition[787]: Ignition 2.19.0 Jan 24 00:52:42.272035 ignition[787]: Stage: kargs Jan 24 00:52:42.272308 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:52:42.272341 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:52:42.278999 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:52:42.273851 ignition[787]: kargs: kargs passed Jan 24 00:52:42.273938 ignition[787]: Ignition finished successfully Jan 24 00:52:42.294116 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:52:42.308906 ignition[795]: Ignition 2.19.0 Jan 24 00:52:42.308931 ignition[795]: Stage: disks Jan 24 00:52:42.309108 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:52:42.311431 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:52:42.309129 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:52:42.316615 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:52:42.309931 ignition[795]: disks: disks passed Jan 24 00:52:42.321473 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:52:42.309978 ignition[795]: Ignition finished successfully Jan 24 00:52:42.329030 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:52:42.335000 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:52:42.341024 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:52:42.358195 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:52:42.374890 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:52:42.380225 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:52:42.398982 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:52:42.503852 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:52:42.504369 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:52:42.507487 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:52:42.534074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:52:42.546759 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Jan 24 00:52:42.538357 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:52:42.561668 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:52:42.561695 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:52:42.561706 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:52:42.546847 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:52:42.546922 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:52:42.579344 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:52:42.546961 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:52:42.558905 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:52:42.582956 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 24 00:52:42.590010 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:52:42.661303 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:52:42.668220 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:52:42.674461 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:52:42.683298 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:52:42.802083 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:52:42.819043 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:52:42.827673 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:52:42.837478 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:52:42.843412 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:52:42.863088 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:52:42.872199 ignition[928]: INFO : Ignition 2.19.0 Jan 24 00:52:42.872199 ignition[928]: INFO : Stage: mount Jan 24 00:52:42.877194 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:52:42.877194 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:52:42.877194 ignition[928]: INFO : mount: mount passed Jan 24 00:52:42.877194 ignition[928]: INFO : Ignition finished successfully Jan 24 00:52:42.892275 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:52:42.911139 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:52:42.921324 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:52:42.952373 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Jan 24 00:52:42.952439 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:52:42.952460 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:52:42.957053 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:52:42.965928 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:52:42.968727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 24 00:52:43.002360 ignition[956]: INFO : Ignition 2.19.0
Jan 24 00:52:43.002360 ignition[956]: INFO : Stage: files
Jan 24 00:52:43.007390 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:52:43.007390 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:52:43.007390 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:52:43.007390 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:52:43.007390 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:52:43.029237 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:52:43.033831 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:52:43.033831 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:52:43.033831 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:52:43.033831 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 24 00:52:43.030426 unknown[956]: wrote ssh authorized keys file for user: core
Jan 24 00:52:44.121155 systemd-networkd[783]: eth0: Gained IPv6LL
Jan 24 00:52:44.143885 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 00:52:44.284206 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:52:44.284206 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 24 00:52:44.295973 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 24 00:52:44.419581 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 24 00:52:44.520628 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 24 00:52:44.520628 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:52:44.532240 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:52:44.539205 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:52:44.546481 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:52:44.553193 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:52:44.560474 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:52:44.567253 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:52:44.574456 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:52:44.581191 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:52:44.588522 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:52:44.595663 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:52:44.603378 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:52:44.610982 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:52:44.617653 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 24 00:52:44.868397 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 24 00:52:45.421047 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 24 00:52:45.421047 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 24 00:52:45.432602 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:52:45.432602 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:52:45.432602 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 24 00:52:45.432602 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 24 00:52:45.432602 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:52:45.432602 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 24 00:52:45.432602 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 24 00:52:45.432602 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:52:45.482158 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:52:45.482158 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 24 00:52:45.482158 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 24 00:52:45.482158 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 00:52:45.482158 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 00:52:45.482158 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:52:45.482158 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:52:45.482158 ignition[956]: INFO : files: files passed
Jan 24 00:52:45.482158 ignition[956]: INFO : Ignition finished successfully
Jan 24 00:52:45.462958 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:52:45.499019 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:52:45.507885 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:52:45.516216 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:52:45.554007 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 24 00:52:45.516341 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:52:45.564106 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:52:45.564106 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:52:45.530208 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:52:45.576464 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:52:45.535741 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:52:45.552081 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:52:45.582253 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:52:45.582396 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:52:45.587464 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:52:45.590371 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:52:45.593295 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:52:45.594289 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:52:45.633192 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:52:45.649010 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:52:45.667397 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:52:45.671349 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:52:45.678115 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:52:45.684747 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:52:45.684962 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:52:45.691529 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:52:45.696555 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:52:45.702656 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:52:45.708650 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:52:45.715859 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:52:45.722707 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
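
[Editor's note: every operation in the files stage above maps one-to-one to an entry in the Ignition config this VM was booted with. A rough Butane sketch that would produce these operations, under stated assumptions: the variant/version header, the SSH key placeholder, and the unit body are assumptions; all paths and URLs are taken from the log. install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, and /etc/flatcar/update.conf would be further storage.files entries whose contents the log does not show:

    variant: flatcar          # assumed
    version: 1.0.0            # assumed
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...   # placeholder; the key is not in the log
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
        - path: /opt/bin/cilium.tar.gz
          contents:
            source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            # body not shown in the log; Ignition wrote this unit
            # to /etc/systemd/system/prepare-helm.service
        - name: coreos-metadata.service
          enabled: false

The enabled: false entry is what drives the "setting preset to disabled ... removing enablement symlink(s)" lines for coreos-metadata.service.]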
Jan 24 00:52:45.729173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:52:45.735896 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:52:45.742094 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:52:45.745066 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:52:45.750917 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:52:45.751079 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:52:45.757728 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:52:45.764472 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:52:45.770683 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:52:45.770953 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:52:45.777239 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:52:45.777394 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:52:45.784040 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:52:45.784173 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:52:45.790061 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:52:45.795967 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:52:45.796264 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:52:45.802293 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:52:45.808932 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:52:45.814359 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:52:45.814476 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:52:45.820266 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:52:45.820351 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:52:45.867007 ignition[1010]: INFO : Ignition 2.19.0
Jan 24 00:52:45.867007 ignition[1010]: INFO : Stage: umount
Jan 24 00:52:45.867007 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:52:45.867007 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 24 00:52:45.826081 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:52:45.926078 ignition[1010]: INFO : umount: umount passed
Jan 24 00:52:45.926078 ignition[1010]: INFO : Ignition finished successfully
Jan 24 00:52:45.826215 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:52:45.832927 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:52:45.833052 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:52:45.854008 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:52:45.859774 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:52:45.864151 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:52:45.864298 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:52:45.870426 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:52:45.870557 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:52:45.874932 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:52:45.875037 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:52:45.880412 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:52:45.880534 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:52:45.887412 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:52:45.887857 systemd[1]: Stopped target network.target - Network.
Jan 24 00:52:45.890473 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:52:45.890528 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:52:45.891649 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:52:45.891696 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:52:45.892625 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:52:45.892670 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:52:45.893560 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:52:45.893639 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:52:45.894838 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:52:45.895226 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:52:45.895961 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:52:45.896077 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:52:45.896259 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:52:45.896304 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:52:45.913322 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:52:45.913469 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:52:45.920403 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:52:45.920464 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:52:45.927943 systemd-networkd[783]: eth0: DHCPv6 lease lost
Jan 24 00:52:45.931708 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:52:45.931944 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:52:45.938679 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:52:45.938754 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:52:45.950950 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:52:45.955405 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:52:45.955481 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:52:45.962102 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:52:45.962151 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:52:45.968617 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:52:45.968674 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:52:45.974963 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:52:45.995142 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:52:46.166904 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:52:45.995328 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:52:46.000284 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:52:46.000421 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:52:46.006558 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:52:46.006682 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:52:46.011634 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:52:46.011692 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:52:46.017535 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:52:46.017637 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:52:46.024037 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:52:46.024097 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:52:46.029948 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:52:46.030011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:52:46.050035 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:52:46.054136 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:52:46.054226 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:52:46.060684 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 24 00:52:46.060750 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:52:46.067104 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:52:46.067171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:52:46.073685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:52:46.073748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:52:46.084146 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:52:46.084327 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:52:46.090433 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:52:46.113038 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:52:46.125516 systemd[1]: Switching root.
Jan 24 00:52:46.272094 systemd-journald[194]: Journal stopped
Jan 24 00:52:47.531995 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:52:47.532114 kernel: SELinux: policy capability open_perms=1
Jan 24 00:52:47.532136 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:52:47.532147 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:52:47.532157 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:52:47.532167 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:52:47.532177 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:52:47.532187 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:52:47.532198 kernel: audit: type=1403 audit(1769215966.375:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 00:52:47.532214 systemd[1]: Successfully loaded SELinux policy in 53.214ms.
Jan 24 00:52:47.532238 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.406ms.
Jan 24 00:52:47.532252 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:52:47.532263 systemd[1]: Detected virtualization kvm.
Jan 24 00:52:47.532275 systemd[1]: Detected architecture x86-64.
Jan 24 00:52:47.532286 systemd[1]: Detected first boot.
Jan 24 00:52:47.532296 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:52:47.532307 zram_generator::config[1052]: No configuration found.
Jan 24 00:52:47.532319 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:52:47.532330 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:52:47.532344 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:52:47.532354 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:52:47.532366 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:52:47.532376 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:52:47.532387 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:52:47.532397 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:52:47.532413 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:52:47.532424 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:52:47.532435 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:52:47.532448 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:52:47.532459 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:52:47.532470 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:52:47.532480 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:52:47.532491 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:52:47.532502 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:52:47.532514 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:52:47.532524 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:52:47.532535 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:52:47.532549 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:52:47.532559 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:52:47.532570 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:52:47.532617 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:52:47.532630 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:52:47.532642 systemd[1]: Reached target remote-fs.target - Remote File Systems.
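
[Editor's note: the long feature string above is systemd's compile-time option list, and the "Detected virtualization kvm" line comes from its virtualization probe. A quick sketch of reading both back on a running host:

    # Print the same version/feature string systemd logs at boot
    systemctl --version
    # Report the detected hypervisor (prints "kvm" here)
    systemd-detect-virt
]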
Jan 24 00:52:47.532653 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:52:47.532663 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:52:47.532678 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:52:47.532689 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:52:47.532700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:52:47.532711 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:52:47.532721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:52:47.532732 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:52:47.532742 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:52:47.532753 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:52:47.532764 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:52:47.532778 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:52:47.532832 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:52:47.532844 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:52:47.532854 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:52:47.532865 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:52:47.532876 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:52:47.532887 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:52:47.532897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:52:47.532911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:52:47.532922 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:52:47.532933 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:52:47.532944 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:52:47.532954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:52:47.532965 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:52:47.532976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:52:47.532987 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:52:47.533000 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:52:47.533010 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:52:47.533021 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:52:47.533031 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:52:47.533042 kernel: fuse: init (API version 7.39)
Jan 24 00:52:47.533053 kernel: loop: module loaded
Jan 24 00:52:47.533065 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:52:47.533076 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:52:47.533086 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:52:47.533099 kernel: ACPI: bus type drm_connector registered
Jan 24 00:52:47.533110 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:52:47.533139 systemd-journald[1133]: Collecting audit messages is disabled.
Jan 24 00:52:47.533161 systemd-journald[1133]: Journal started
Jan 24 00:52:47.533180 systemd-journald[1133]: Runtime Journal (/run/log/journal/529e2db61f8347ab8ed095f7a663e2c2) is 6.0M, max 48.4M, 42.3M free.
Jan 24 00:52:47.037279 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:52:47.062296 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 24 00:52:47.063058 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:52:47.063464 systemd[1]: systemd-journald.service: Consumed 1.651s CPU time.
Jan 24 00:52:47.545292 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:52:47.551860 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 00:52:47.551901 systemd[1]: Stopped verity-setup.service.
Jan 24 00:52:47.559940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:52:47.565122 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:52:47.568843 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:52:47.572192 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:52:47.575780 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:52:47.579062 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:52:47.582223 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:52:47.585654 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:52:47.588952 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:52:47.592724 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:52:47.596973 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:52:47.597244 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:52:47.601014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:52:47.601242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:52:47.604697 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:52:47.604986 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:52:47.608409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:52:47.608634 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:52:47.612333 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:52:47.612688 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:52:47.616638 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:52:47.617202 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:52:47.621112 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:52:47.624624 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:52:47.628514 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:52:47.649423 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:52:47.663086 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:52:47.669062 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:52:47.672888 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:52:47.672963 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:52:47.678116 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:52:47.684367 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:52:47.690260 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:52:47.694217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:52:47.696846 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:52:47.703174 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:52:47.708401 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:52:47.724175 systemd-journald[1133]: Time spent on flushing to /var/log/journal/529e2db61f8347ab8ed095f7a663e2c2 is 28.658ms for 944 entries.
Jan 24 00:52:47.724175 systemd-journald[1133]: System Journal (/var/log/journal/529e2db61f8347ab8ed095f7a663e2c2) is 8.0M, max 195.6M, 187.6M free.
Jan 24 00:52:47.794835 systemd-journald[1133]: Received client request to flush runtime journal.
Jan 24 00:52:47.794899 kernel: loop0: detected capacity change from 0 to 219144
Jan 24 00:52:47.711106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:52:47.716474 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:52:47.719024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:52:47.731181 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:52:47.744045 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:52:47.754171 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:52:47.772497 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:52:47.784745 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:52:47.790134 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:52:47.798367 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:52:47.804575 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:52:47.817377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
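
[Editor's note: the journald lines above show the volatile runtime journal in /run/log/journal being flushed into the persistent one in /var/log/journal, with the sizes of each. A hedged sketch of inspecting and forcing the same step on a running machine:

    # Show how much disk the journals currently occupy
    journalctl --disk-usage
    # Flush /run/log/journal into /var/log/journal, the same work
    # systemd-journal-flush.service performs at boot
    journalctl --flush
]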
Jan 24 00:52:47.826354 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:52:47.835836 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:52:47.838350 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:52:47.843636 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Jan 24 00:52:47.843652 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Jan 24 00:52:47.849973 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:52:47.861736 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:52:47.870107 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:52:47.871297 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:52:47.881672 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 24 00:52:47.883098 kernel: loop1: detected capacity change from 0 to 140768
Jan 24 00:52:47.889125 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:52:47.932853 kernel: loop2: detected capacity change from 0 to 142488
Jan 24 00:52:47.938979 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:52:47.952182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:52:47.973475 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 24 00:52:47.973911 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 24 00:52:47.980647 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:52:47.993883 kernel: loop3: detected capacity change from 0 to 219144
Jan 24 00:52:48.009854 kernel: loop4: detected capacity change from 0 to 140768
Jan 24 00:52:48.028904 kernel: loop5: detected capacity change from 0 to 142488
Jan 24 00:52:48.047838 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 24 00:52:48.048482 (sd-merge)[1195]: Merged extensions into '/usr'.
Jan 24 00:52:48.054369 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:52:48.054516 systemd[1]: Reloading...
Jan 24 00:52:48.108865 zram_generator::config[1221]: No configuration found.
Jan 24 00:52:48.182340 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:52:48.251074 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:52:48.301502 systemd[1]: Reloading finished in 246 ms.
Jan 24 00:52:48.342628 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:52:48.347419 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:52:48.352894 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:52:48.374206 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:52:48.378458 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
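
[Editor's note: the (sd-merge) lines above are systemd-sysext overlaying the extension images, including the kubernetes sysext that Ignition linked into /etc/extensions earlier, onto /usr; the loopN capacity changes are those images being attached. A hedged sketch of inspecting and refreshing the merge on a booted host:

    # List merged system extensions and the hierarchies they cover
    systemd-sysext status
    # Re-scan the extension directories and re-merge, the same work
    # systemd-sysext.service does at boot
    systemd-sysext refresh
]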
Jan 24 00:52:48.383961 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:52:48.387097 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:52:48.387132 systemd[1]: Reloading...
Jan 24 00:52:48.419870 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:52:48.423066 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:52:48.423520 systemd-udevd[1262]: Using default interface naming scheme 'v255'.
Jan 24 00:52:48.427454 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:52:48.427926 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 24 00:52:48.428067 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 24 00:52:48.434497 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:52:48.438842 zram_generator::config[1288]: No configuration found.
Jan 24 00:52:48.435780 systemd-tmpfiles[1261]: Skipping /boot
Jan 24 00:52:48.451478 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:52:48.451582 systemd-tmpfiles[1261]: Skipping /boot
Jan 24 00:52:48.506847 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1318)
Jan 24 00:52:48.578931 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 24 00:52:48.579243 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 24 00:52:48.582683 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 24 00:52:48.584390 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 24 00:52:48.594837 kernel: ACPI: button: Power Button [PWRF]
Jan 24 00:52:48.600924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:52:48.645851 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 24 00:52:48.655837 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 00:52:48.678056 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 24 00:52:48.682118 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 24 00:52:48.682725 systemd[1]: Reloading finished in 295 ms.
Jan 24 00:52:48.762537 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:52:48.763437 kernel: kvm_amd: TSC scaling supported
Jan 24 00:52:48.763473 kernel: kvm_amd: Nested Virtualization enabled
Jan 24 00:52:48.763486 kernel: kvm_amd: Nested Paging enabled
Jan 24 00:52:48.763525 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 24 00:52:48.772257 kernel: kvm_amd: PMU virtualization is disabled
Jan 24 00:52:48.813357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:52:48.826894 kernel: EDAC MC: Ver: 3.0.0
Jan 24 00:52:48.838127 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:52:48.862768 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:52:48.867954 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:52:48.883050 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:52:48.888577 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:52:48.892429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:52:48.894379 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:52:48.901972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:52:48.909167 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:52:48.915043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:52:48.920137 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:52:48.924054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:52:48.927969 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 00:52:48.929174 lvm[1362]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:52:48.933213 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:52:48.939926 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:52:48.947210 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:52:48.955210 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 24 00:52:48.957706 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:52:48.961338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:52:48.963413 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:52:48.964537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:52:48.964769 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:52:48.968252 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:52:48.968445 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:52:48.972219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:52:48.972413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:52:48.976137 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:52:48.977015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:52:48.983536 augenrules[1384]: No rules
Jan 24 00:52:48.985668 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:52:48.989637 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:52:48.993385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:52:48.993646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:52:49.003162 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 00:52:49.006351 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:52:49.008416 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:52:49.011900 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:52:49.014971 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:52:49.019117 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:52:49.024328 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:52:49.026698 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 00:52:49.027700 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:52:49.032308 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:52:49.051542 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 00:52:49.140325 systemd-networkd[1381]: lo: Link UP
Jan 24 00:52:49.140365 systemd-networkd[1381]: lo: Gained carrier
Jan 24 00:52:49.142399 systemd-networkd[1381]: Enumeration completed
Jan 24 00:52:49.143331 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:52:49.143360 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:52:49.144662 systemd-networkd[1381]: eth0: Link UP
Jan 24 00:52:49.144687 systemd-networkd[1381]: eth0: Gained carrier
Jan 24 00:52:49.144700 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:52:49.146509 systemd-resolved[1383]: Positive Trust Anchors:
Jan 24 00:52:49.146554 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:52:49.146581 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:52:49.151068 systemd-resolved[1383]: Defaulting to hostname 'linux'.
Jan 24 00:52:49.162864 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 24 00:52:49.163736 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection.
Jan 24 00:52:49.695982 systemd-resolved[1383]: Clock change detected. Flushing caches.
Jan 24 00:52:49.696040 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1).
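
[Editor's note: eth0 is matched by the catch-all zz-default.network shipped read-only under /usr/lib/systemd/network, which is why networkd warns about the "potentially unpredictable interface name". A hedged sketch of what such a unit contains and how to inspect the resulting DHCP lease; the file body is illustrative, the unit Flatcar actually ships may differ in detail:

    # zz-default.network -- illustrative only
    [Match]
    Name=*

    [Network]
    DHCP=yes

    # Inspect the address and lease networkd acquired (10.0.0.97/16 here)
    networkctl status eth0
]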
Jan 24 00:52:49.696095 systemd-timesyncd[1385]: Initial clock synchronization to Sat 2026-01-24 00:52:49.695945 UTC.
Jan 24 00:52:49.757948 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 00:52:49.761907 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:52:49.765939 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 24 00:52:49.770013 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:52:49.774252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:52:49.778541 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:52:49.784367 systemd[1]: Reached target network.target - Network.
Jan 24 00:52:49.787332 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:52:49.791137 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:52:49.794791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 00:52:49.798834 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 00:52:49.803123 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 00:52:49.807133 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 00:52:49.807180 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:52:49.810498 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 00:52:49.814203 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 00:52:49.817957 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 00:52:49.822182 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:52:49.826373 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 00:52:49.831741 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 00:52:49.843380 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 00:52:49.848536 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 24 00:52:49.852986 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 00:52:49.857012 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:52:49.860253 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:52:49.863300 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:52:49.863364 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:52:49.864739 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 00:52:49.869939 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 00:52:49.874465 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 00:52:49.880089 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 00:52:49.883640 jq[1426]: false
Jan 24 00:52:49.884183 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 00:52:49.885997 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 00:52:49.892010 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 24 00:52:49.897107 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 00:52:49.906102 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 00:52:49.909468 extend-filesystems[1427]: Found loop3
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found loop4
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found loop5
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found sr0
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found vda
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found vda1
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found vda2
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found vda3
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found usr
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found vda4
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found vda6
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found vda7
Jan 24 00:52:49.913051 extend-filesystems[1427]: Found vda9
Jan 24 00:52:49.913051 extend-filesystems[1427]: Checking size of /dev/vda9
Jan 24 00:52:49.932651 dbus-daemon[1425]: [system] SELinux support is enabled
Jan 24 00:52:49.967960 extend-filesystems[1427]: Resized partition /dev/vda9
Jan 24 00:52:49.986060 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1318)
Jan 24 00:52:49.918088 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 00:52:49.986421 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024)
Jan 24 00:52:49.999941 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 24 00:52:49.920481 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 00:52:49.922078 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 00:52:50.000290 update_engine[1441]: I20260124 00:52:49.978356 1441 main.cc:92] Flatcar Update Engine starting
Jan 24 00:52:50.000290 update_engine[1441]: I20260124 00:52:49.985004 1441 update_check_scheduler.cc:74] Next update check in 4m30s
Jan 24 00:52:49.926036 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 00:52:49.936260 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 00:52:49.939443 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 00:52:49.950210 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 00:52:49.950470 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 00:52:49.951014 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 00:52:49.951244 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 00:52:49.962376 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 00:52:49.963399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
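
[Editor's note: extend-filesystems scans the block devices, resizes the root partition, and then grows the mounted ext4 filesystem online; the resize2fs output confirming the new 1864699-block size follows below. A hedged sketch of the equivalent manual step, using the device named in the log:

    # Grow the mounted root filesystem to fill its (already enlarged)
    # partition; ext4 supports doing this online
    resize2fs /dev/vda9
    # Confirm the new size
    df -h /
]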
Jan 24 00:52:50.009961 jq[1443]: true
Jan 24 00:52:50.019433 (ntainerd)[1451]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 00:52:50.022933 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 00:52:50.032131 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 24 00:52:50.032181 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 00:52:50.034048 systemd-logind[1439]: New seat seat0.
Jan 24 00:52:50.036776 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 00:52:50.036899 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 00:52:50.043521 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 00:52:50.043582 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 00:52:50.046138 jq[1458]: true
Jan 24 00:52:50.055180 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 00:52:50.062236 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:52:50.078039 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 24 00:52:50.084257 tar[1449]: linux-amd64/LICENSE
Jan 24 00:52:50.086932 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 00:52:50.093778 tar[1449]: linux-amd64/helm
Jan 24 00:52:50.097022 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 24 00:52:50.097022 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 24 00:52:50.097022 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 24 00:52:50.117242 extend-filesystems[1427]: Resized filesystem in /dev/vda9
Jan 24 00:52:50.101220 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 00:52:50.101437 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 00:52:50.131152 bash[1485]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:52:50.132952 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 00:52:50.137430 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 24 00:52:50.218925 containerd[1451]: time="2026-01-24T00:52:50.218750982Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 00:52:50.236932 containerd[1451]: time="2026-01-24T00:52:50.236887064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:52:50.239475 containerd[1451]: time="2026-01-24T00:52:50.239424868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:52:50.239475 containerd[1451]: time="2026-01-24T00:52:50.239472226Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 24 00:52:50.239533 containerd[1451]: time="2026-01-24T00:52:50.239489077Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 00:52:50.239731 containerd[1451]: time="2026-01-24T00:52:50.239657592Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 00:52:50.239757 containerd[1451]: time="2026-01-24T00:52:50.239731570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 00:52:50.239832 containerd[1451]: time="2026-01-24T00:52:50.239799798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:52:50.239909 containerd[1451]: time="2026-01-24T00:52:50.239831497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:52:50.240127 containerd[1451]: time="2026-01-24T00:52:50.240091642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:52:50.240152 containerd[1451]: time="2026-01-24T00:52:50.240128301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 00:52:50.240152 containerd[1451]: time="2026-01-24T00:52:50.240141716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:52:50.240183 containerd[1451]: time="2026-01-24T00:52:50.240155361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 00:52:50.240278 containerd[1451]: time="2026-01-24T00:52:50.240246541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:52:50.240549 containerd[1451]: time="2026-01-24T00:52:50.240499704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:52:50.240713 containerd[1451]: time="2026-01-24T00:52:50.240636529Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:52:50.240713 containerd[1451]: time="2026-01-24T00:52:50.240701140Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 00:52:50.240831 containerd[1451]: time="2026-01-24T00:52:50.240799894Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 00:52:50.240954 containerd[1451]: time="2026-01-24T00:52:50.240925869Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 00:52:50.246067 containerd[1451]: time="2026-01-24T00:52:50.246013215Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 00:52:50.246173 containerd[1451]: time="2026-01-24T00:52:50.246100728Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 24 00:52:50.246207 containerd[1451]: time="2026-01-24T00:52:50.246190907Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 24 00:52:50.246227 containerd[1451]: time="2026-01-24T00:52:50.246213980Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 24 00:52:50.246256 containerd[1451]: time="2026-01-24T00:52:50.246226974Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 00:52:50.246417 containerd[1451]: time="2026-01-24T00:52:50.246358049Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 00:52:50.246624 containerd[1451]: time="2026-01-24T00:52:50.246574022Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 00:52:50.246810 containerd[1451]: time="2026-01-24T00:52:50.246760520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 24 00:52:50.246810 containerd[1451]: time="2026-01-24T00:52:50.246801667Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 24 00:52:50.246902 containerd[1451]: time="2026-01-24T00:52:50.246816154Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 24 00:52:50.246902 containerd[1451]: time="2026-01-24T00:52:50.246828386Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 00:52:50.246902 containerd[1451]: time="2026-01-24T00:52:50.246839578Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 00:52:50.246966 containerd[1451]: time="2026-01-24T00:52:50.246902946Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 00:52:50.246966 containerd[1451]: time="2026-01-24T00:52:50.246916772Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 00:52:50.246966 containerd[1451]: time="2026-01-24T00:52:50.246929716Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 24 00:52:50.246966 containerd[1451]: time="2026-01-24T00:52:50.246940506Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 00:52:50.246966 containerd[1451]: time="2026-01-24T00:52:50.246951175Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 00:52:50.246966 containerd[1451]: time="2026-01-24T00:52:50.246960513Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 00:52:50.247067 containerd[1451]: time="2026-01-24T00:52:50.246978026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247067 containerd[1451]: time="2026-01-24T00:52:50.246994156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247067 containerd[1451]: time="2026-01-24T00:52:50.247006138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247067 containerd[1451]: time="2026-01-24T00:52:50.247023911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247067 containerd[1451]: time="2026-01-24T00:52:50.247045001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247067 containerd[1451]: time="2026-01-24T00:52:50.247064598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247165 containerd[1451]: time="2026-01-24T00:52:50.247083763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247165 containerd[1451]: time="2026-01-24T00:52:50.247105644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247165 containerd[1451]: time="2026-01-24T00:52:50.247145068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247210 containerd[1451]: time="2026-01-24T00:52:50.247165095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247210 containerd[1451]: time="2026-01-24T00:52:50.247176777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247210 containerd[1451]: time="2026-01-24T00:52:50.247187958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247210 containerd[1451]: time="2026-01-24T00:52:50.247198758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247277 containerd[1451]: time="2026-01-24T00:52:50.247211231Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 24 00:52:50.247277 containerd[1451]: time="2026-01-24T00:52:50.247233502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247277 containerd[1451]: time="2026-01-24T00:52:50.247244884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 00:52:50.247277 containerd[1451]: time="2026-01-24T00:52:50.247259100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 00:52:50.247385 containerd[1451]: time="2026-01-24T00:52:50.247345562Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 24 00:52:50.247408 containerd[1451]: time="2026-01-24T00:52:50.247387610Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 24 00:52:50.247408 containerd[1451]: time="2026-01-24T00:52:50.247398180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 24 00:52:50.247441 containerd[1451]: time="2026-01-24T00:52:50.247409441Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 24 00:52:50.247441 containerd[1451]: time="2026-01-24T00:52:50.247417927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..."
type=io.containerd.grpc.v1 Jan 24 00:52:50.247441 containerd[1451]: time="2026-01-24T00:52:50.247436632Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:52:50.247493 containerd[1451]: time="2026-01-24T00:52:50.247447122Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:52:50.247493 containerd[1451]: time="2026-01-24T00:52:50.247456449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:52:50.247792 containerd[1451]: time="2026-01-24T00:52:50.247712957Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:52:50.247792 containerd[1451]: time="2026-01-24T00:52:50.247784210Z" level=info msg="Connect containerd service" Jan 24 00:52:50.248000 containerd[1451]: time="2026-01-24T00:52:50.247814747Z" level=info msg="using legacy CRI server" Jan 24 00:52:50.248000 containerd[1451]: time="2026-01-24T00:52:50.247821090Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:52:50.248000 containerd[1451]: 
time="2026-01-24T00:52:50.247966050Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:52:50.248658 containerd[1451]: time="2026-01-24T00:52:50.248588151Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:52:50.248922 containerd[1451]: time="2026-01-24T00:52:50.248831699Z" level=info msg="Start subscribing containerd event" Jan 24 00:52:50.248951 containerd[1451]: time="2026-01-24T00:52:50.248934120Z" level=info msg="Start recovering state" Jan 24 00:52:50.249079 containerd[1451]: time="2026-01-24T00:52:50.249019570Z" level=info msg="Start event monitor" Jan 24 00:52:50.249079 containerd[1451]: time="2026-01-24T00:52:50.249071427Z" level=info msg="Start snapshots syncer" Jan 24 00:52:50.249128 containerd[1451]: time="2026-01-24T00:52:50.249081105Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:52:50.249128 containerd[1451]: time="2026-01-24T00:52:50.249089681Z" level=info msg="Start streaming server" Jan 24 00:52:50.249200 containerd[1451]: time="2026-01-24T00:52:50.249162934Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:52:50.249254 containerd[1451]: time="2026-01-24T00:52:50.249219740Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:52:50.249348 containerd[1451]: time="2026-01-24T00:52:50.249305510Z" level=info msg="containerd successfully booted in 0.033484s" Jan 24 00:52:50.249439 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:52:50.298518 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:52:50.322532 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:52:50.331141 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:52:50.342113 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:52:50.342368 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:52:50.353458 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:52:50.364302 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:52:50.375204 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:52:50.379298 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:52:50.382979 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:52:50.527605 tar[1449]: linux-amd64/README.md Jan 24 00:52:50.543394 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:52:51.180170 systemd-networkd[1381]: eth0: Gained IPv6LL Jan 24 00:52:51.183604 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:52:51.188170 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:52:51.201141 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:52:51.205583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:52:51.209833 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:52:51.229718 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:52:51.230051 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 24 00:52:51.233493 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:52:51.242252 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:52:51.960455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:52:51.964953 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:52:51.966666 (kubelet)[1537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:52:51.971021 systemd[1]: Startup finished in 1.236s (kernel) + 7.621s (initrd) + 5.113s (userspace) = 13.971s. Jan 24 00:52:52.371947 kubelet[1537]: E0124 00:52:52.371785 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:52:52.375505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:52:52.375770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:52:53.509581 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:52:53.528201 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:53322.service - OpenSSH per-connection server daemon (10.0.0.1:53322). Jan 24 00:52:53.578766 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 53322 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:52:53.581208 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:53.591582 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:52:53.606286 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:52:53.608469 systemd-logind[1439]: New session 1 of user core. Jan 24 00:52:53.621964 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:52:53.625034 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:52:53.635601 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:52:53.752737 systemd[1554]: Queued start job for default target default.target. Jan 24 00:52:53.766235 systemd[1554]: Created slice app.slice - User Application Slice. Jan 24 00:52:53.766282 systemd[1554]: Reached target paths.target - Paths. Jan 24 00:52:53.766295 systemd[1554]: Reached target timers.target - Timers. Jan 24 00:52:53.768064 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:52:53.780385 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:52:53.780525 systemd[1554]: Reached target sockets.target - Sockets. Jan 24 00:52:53.780562 systemd[1554]: Reached target basic.target - Basic System. Jan 24 00:52:53.780599 systemd[1554]: Reached target default.target - Main User Target. Jan 24 00:52:53.780634 systemd[1554]: Startup finished in 136ms. Jan 24 00:52:53.781141 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:52:53.782980 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:52:53.852246 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:53332.service - OpenSSH per-connection server daemon (10.0.0.1:53332). 
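The kubelet failure above is the expected first-boot state: /var/lib/kubelet/config.yaml is only written when kubeadm initializes or joins the node, so until then every start exits with status 1 and systemd records the failure. A small sketch reproducing the failing precondition (path taken from the log; the helper name is made up):

```python
# Minimal sketch of the precondition kubelet trips over above: the
# KubeletConfiguration file does not exist yet, the open() fails with
# ENOENT, the process exits 1, and systemd logs status=1/FAILURE.
from pathlib import Path
import sys

CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the log above

def load_kubelet_config(path: Path = CONFIG) -> str:
    try:
        return path.read_text()
    except FileNotFoundError as err:
        sys.exit(f"failed to load kubelet config file, path: {path}, error: {err}")

if __name__ == "__main__":
    load_kubelet_config()
```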
Jan 24 00:52:53.883334 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 53332 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:52:53.885257 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:53.890328 systemd-logind[1439]: New session 2 of user core. Jan 24 00:52:53.909076 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:52:53.966410 sshd[1565]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:53.981740 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:53332.service: Deactivated successfully. Jan 24 00:52:53.983317 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:52:53.985178 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:52:53.993288 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:53334.service - OpenSSH per-connection server daemon (10.0.0.1:53334). Jan 24 00:52:53.994615 systemd-logind[1439]: Removed session 2. Jan 24 00:52:54.026309 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 53334 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:52:54.028056 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:54.033250 systemd-logind[1439]: New session 3 of user core. Jan 24 00:52:54.044059 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:52:54.098316 sshd[1572]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:54.115770 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:53334.service: Deactivated successfully. Jan 24 00:52:54.117537 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:52:54.119288 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:52:54.126149 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:53344.service - OpenSSH per-connection server daemon (10.0.0.1:53344). Jan 24 00:52:54.127143 systemd-logind[1439]: Removed session 3. Jan 24 00:52:54.154367 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 53344 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:52:54.155970 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:54.160430 systemd-logind[1439]: New session 4 of user core. Jan 24 00:52:54.169027 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:52:54.226451 sshd[1579]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:54.233606 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:53344.service: Deactivated successfully. Jan 24 00:52:54.235237 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:52:54.236719 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:52:54.238034 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:53354.service - OpenSSH per-connection server daemon (10.0.0.1:53354). Jan 24 00:52:54.239380 systemd-logind[1439]: Removed session 4. Jan 24 00:52:54.270065 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 53354 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:52:54.271458 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:54.277005 systemd-logind[1439]: New session 5 of user core. Jan 24 00:52:54.284061 systemd[1]: Started session-5.scope - Session 5 of User core. 
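Each provisioning step that follows arrives over its own short-lived SSH session, which is why sessions 2 through 7 open and close in quick succession here. A hypothetical helper for pulling those open/close events out of logs like this one:

```python
# Hypothetical helper for the sshd/pam churn above: extract each
# "session opened" / "session closed" event so the per-command SSH
# sessions can be paired up when analyzing such a log.
import re

OPEN = re.compile(r"sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened for user (\w+)")
CLOSE = re.compile(r"sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed for user (\w+)")

def session_events(lines):
    for line in lines:
        if m := OPEN.search(line):
            yield ("open", m.group(1), m.group(2))
        elif m := CLOSE.search(line):
            yield ("close", m.group(1), m.group(2))

if __name__ == "__main__":
    sample = [
        "Jan 24 00:52:53.885257 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
        "Jan 24 00:52:53.966410 sshd[1565]: pam_unix(sshd:session): session closed for user core",
    ]
    for ev in session_events(sample):
        print(ev)  # ('open', '1565', 'core') then ('close', '1565', 'core')
```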
Jan 24 00:52:54.350116 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:52:54.350491 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:52:54.370961 sudo[1589]: pam_unix(sudo:session): session closed for user root Jan 24 00:52:54.373658 sshd[1586]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:54.385839 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:53354.service: Deactivated successfully. Jan 24 00:52:54.387492 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:52:54.389192 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:52:54.401180 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:56902.service - OpenSSH per-connection server daemon (10.0.0.1:56902). Jan 24 00:52:54.402669 systemd-logind[1439]: Removed session 5. Jan 24 00:52:54.430592 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 56902 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:52:54.432444 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:54.438287 systemd-logind[1439]: New session 6 of user core. Jan 24 00:52:54.448145 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:52:54.507599 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:52:54.508223 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:52:54.513602 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 24 00:52:54.522940 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:52:54.523473 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:52:54.552262 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:52:54.554733 auditctl[1601]: No rules Jan 24 00:52:54.555249 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:52:54.555595 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:52:54.559447 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:52:54.598712 augenrules[1619]: No rules Jan 24 00:52:54.599769 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:52:54.601014 sudo[1597]: pam_unix(sudo:session): session closed for user root Jan 24 00:52:54.603220 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 24 00:52:54.615491 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:56902.service: Deactivated successfully. Jan 24 00:52:54.617216 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:52:54.619414 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:52:54.633204 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:56906.service - OpenSSH per-connection server daemon (10.0.0.1:56906). Jan 24 00:52:54.634250 systemd-logind[1439]: Removed session 6. Jan 24 00:52:54.662579 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 56906 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:52:54.664301 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:52:54.669182 systemd-logind[1439]: New session 7 of user core. Jan 24 00:52:54.680065 systemd[1]: Started session-7.scope - Session 7 of User core. 
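The audit-rules restart above ends with "No rules" from both auditctl and augenrules because the sudo commands just before it deleted the only two fragments under /etc/audit/rules.d. augenrules assembles its ruleset by merging every *.rules fragment in that directory; a simplified sketch of the merge (real augenrules also special-cases control options such as -D and -e, omitted here):

```python
# Simplified sketch of the augenrules step above: merge every
# /etc/audit/rules.d/*.rules fragment into one ruleset. With the two
# fragments removed by the earlier sudo commands, the merge is empty,
# hence "No rules" in the log.
from pathlib import Path

def merged_rules(rules_dir: str = "/etc/audit/rules.d") -> str:
    fragments = sorted(Path(rules_dir).glob("*.rules"))
    return "\n".join(frag.read_text() for frag in fragments)

if __name__ == "__main__":
    print(merged_rules() or "No rules")
```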
Jan 24 00:52:54.736419 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:52:54.736817 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:52:55.018118 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:52:55.018392 (dockerd)[1648]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:52:55.310006 dockerd[1648]: time="2026-01-24T00:52:55.309761295Z" level=info msg="Starting up" Jan 24 00:52:55.598544 dockerd[1648]: time="2026-01-24T00:52:55.598225370Z" level=info msg="Loading containers: start." Jan 24 00:52:55.745969 kernel: Initializing XFRM netlink socket Jan 24 00:52:55.843677 systemd-networkd[1381]: docker0: Link UP Jan 24 00:52:55.873205 dockerd[1648]: time="2026-01-24T00:52:55.873053863Z" level=info msg="Loading containers: done." Jan 24 00:52:55.889996 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2006389296-merged.mount: Deactivated successfully. Jan 24 00:52:55.891968 dockerd[1648]: time="2026-01-24T00:52:55.891915106Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:52:55.892059 dockerd[1648]: time="2026-01-24T00:52:55.892025932Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:52:55.892185 dockerd[1648]: time="2026-01-24T00:52:55.892137491Z" level=info msg="Daemon has completed initialization" Jan 24 00:52:55.937664 dockerd[1648]: time="2026-01-24T00:52:55.937601879Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:52:55.937941 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:52:56.650216 containerd[1451]: time="2026-01-24T00:52:56.650152659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 24 00:52:57.387355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350530121.mount: Deactivated successfully. 
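Docker's overlay2 warning above appears when the running kernel sets CONFIG_OVERLAY_FS_REDIRECT_DIR, which makes dockerd fall back to the slower non-native diff path for image builds. A hedged way to check that option on a given machine; kernel config locations vary by distro, so two common ones are tried:

```python
# Check the kernel option behind the overlay2 warning above. The config
# may live in /proc/config.gz or /boot/config-$(uname -r) depending on
# the distro; both are tried, and neither is guaranteed to exist.
import gzip, os
from pathlib import Path

OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

def kernel_config_lines():
    proc = Path("/proc/config.gz")
    if proc.exists():
        with gzip.open(proc, "rt") as fh:
            yield from fh
        return
    boot = Path(f"/boot/config-{os.uname().release}")
    if boot.exists():
        yield from boot.read_text().splitlines()

def redirect_dir_enabled() -> bool:
    return any(line.strip() == f"{OPTION}=y" for line in kernel_config_lines())

if __name__ == "__main__":
    print(f"{OPTION} enabled:", redirect_dir_enabled())
```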
Jan 24 00:52:58.444620 containerd[1451]: time="2026-01-24T00:52:58.444529619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:58.445564 containerd[1451]: time="2026-01-24T00:52:58.445480490Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 24 00:52:58.446541 containerd[1451]: time="2026-01-24T00:52:58.446465161Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:58.450275 containerd[1451]: time="2026-01-24T00:52:58.450212526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:58.451673 containerd[1451]: time="2026-01-24T00:52:58.451624651Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.801398665s" Jan 24 00:52:58.451806 containerd[1451]: time="2026-01-24T00:52:58.451671098Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 24 00:52:58.452774 containerd[1451]: time="2026-01-24T00:52:58.452692692Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 24 00:52:59.439409 containerd[1451]: time="2026-01-24T00:52:59.439276318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:59.440402 containerd[1451]: time="2026-01-24T00:52:59.440336577Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 24 00:52:59.441987 containerd[1451]: time="2026-01-24T00:52:59.441929080Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:59.445630 containerd[1451]: time="2026-01-24T00:52:59.445520243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:52:59.446693 containerd[1451]: time="2026-01-24T00:52:59.446618758Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 993.827453ms" Jan 24 00:52:59.446693 containerd[1451]: time="2026-01-24T00:52:59.446671907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 24 00:52:59.447533 
containerd[1451]: time="2026-01-24T00:52:59.447339580Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 24 00:53:00.250288 containerd[1451]: time="2026-01-24T00:53:00.250166880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:00.251122 containerd[1451]: time="2026-01-24T00:53:00.251077131Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 24 00:53:00.252241 containerd[1451]: time="2026-01-24T00:53:00.252187878Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:00.255632 containerd[1451]: time="2026-01-24T00:53:00.255534149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:00.256825 containerd[1451]: time="2026-01-24T00:53:00.256686254Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 809.316297ms" Jan 24 00:53:00.256825 containerd[1451]: time="2026-01-24T00:53:00.256782364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 24 00:53:00.257425 containerd[1451]: time="2026-01-24T00:53:00.257333347Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:53:01.245679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932352276.mount: Deactivated successfully. 
Jan 24 00:53:01.468472 containerd[1451]: time="2026-01-24T00:53:01.468400149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:01.469407 containerd[1451]: time="2026-01-24T00:53:01.469350914Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 24 00:53:01.470637 containerd[1451]: time="2026-01-24T00:53:01.470587142Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:01.472915 containerd[1451]: time="2026-01-24T00:53:01.472821943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:01.473429 containerd[1451]: time="2026-01-24T00:53:01.473375681Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.215973827s" Jan 24 00:53:01.473429 containerd[1451]: time="2026-01-24T00:53:01.473418943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:53:01.474040 containerd[1451]: time="2026-01-24T00:53:01.473980330Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 24 00:53:01.901835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752106798.mount: Deactivated successfully. Jan 24 00:53:02.626242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:53:02.635301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:02.793096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:02.798241 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:53:02.855441 kubelet[1932]: E0124 00:53:02.855400 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:53:02.862032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:53:02.862347 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
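Comparing timestamps, kubelet first failed at 00:52:52.375 and systemd scheduled this retry at 00:53:02.626, a gap of roughly ten seconds. That spacing is consistent with Restart=always plus RestartSec=10, which is what kubeadm's stock drop-in ships; the unit file itself is not shown in this log, so treat that as an assumption:

```python
# Gap between "Failed with result 'exit-code'" and "Scheduled restart
# job" (timestamps truncated to milliseconds from the entries above).
# ~10.25 s matches RestartSec=10 plus scheduling overhead (assumed from
# kubeadm's stock drop-in, not visible in this log).
from datetime import datetime

failed    = datetime.strptime("00:52:52.375", "%H:%M:%S.%f")
scheduled = datetime.strptime("00:53:02.626", "%H:%M:%S.%f")
print((scheduled - failed).total_seconds())  # 10.251
```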
Jan 24 00:53:03.007284 containerd[1451]: time="2026-01-24T00:53:03.007085294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:03.008372 containerd[1451]: time="2026-01-24T00:53:03.008284398Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 24 00:53:03.009434 containerd[1451]: time="2026-01-24T00:53:03.009377638Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:03.012655 containerd[1451]: time="2026-01-24T00:53:03.012588811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:03.013843 containerd[1451]: time="2026-01-24T00:53:03.013745110Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.539705829s" Jan 24 00:53:03.013843 containerd[1451]: time="2026-01-24T00:53:03.013831141Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 24 00:53:03.014508 containerd[1451]: time="2026-01-24T00:53:03.014446700Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 24 00:53:03.624170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752886322.mount: Deactivated successfully. 
Jan 24 00:53:03.631135 containerd[1451]: time="2026-01-24T00:53:03.631068576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:03.632182 containerd[1451]: time="2026-01-24T00:53:03.632088159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 24 00:53:03.633240 containerd[1451]: time="2026-01-24T00:53:03.633161302Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:03.635962 containerd[1451]: time="2026-01-24T00:53:03.635843769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:03.636656 containerd[1451]: time="2026-01-24T00:53:03.636573421Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 622.056951ms" Jan 24 00:53:03.636656 containerd[1451]: time="2026-01-24T00:53:03.636624236Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 24 00:53:03.637386 containerd[1451]: time="2026-01-24T00:53:03.637175216Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 24 00:53:04.055553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285633887.mount: Deactivated successfully. Jan 24 00:53:06.099210 containerd[1451]: time="2026-01-24T00:53:06.099124875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:06.100115 containerd[1451]: time="2026-01-24T00:53:06.100037481Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 24 00:53:06.101343 containerd[1451]: time="2026-01-24T00:53:06.101287562Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:06.105264 containerd[1451]: time="2026-01-24T00:53:06.105206109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:06.106379 containerd[1451]: time="2026-01-24T00:53:06.106322470Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.469122247s" Jan 24 00:53:06.106379 containerd[1451]: time="2026-01-24T00:53:06.106371781Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 24 00:53:08.506670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:53:08.515244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:08.553956 systemd[1]: Reloading requested from client PID 2028 ('systemctl') (unit session-7.scope)... Jan 24 00:53:08.554003 systemd[1]: Reloading... Jan 24 00:53:08.636993 zram_generator::config[2070]: No configuration found. Jan 24 00:53:08.749682 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:53:08.819630 systemd[1]: Reloading finished in 265 ms. Jan 24 00:53:08.886988 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:53:08.887158 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:53:08.887561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:08.899311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:53:09.058968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:53:09.075364 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:53:09.124297 kubelet[2116]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:53:09.124297 kubelet[2116]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:53:09.124665 kubelet[2116]: I0124 00:53:09.124307 2116 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:53:09.445064 kubelet[2116]: I0124 00:53:09.444811 2116 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:53:09.445064 kubelet[2116]: I0124 00:53:09.444944 2116 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:53:09.445064 kubelet[2116]: I0124 00:53:09.444975 2116 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:53:09.445064 kubelet[2116]: I0124 00:53:09.444984 2116 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
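From here on, most kubelet output uses klog's header format, "Lmmdd hh:mm:ss.uuuuuu PID file:line] msg", as in the version line above. A small, hypothetical parser for lines of that shape:

```python
# Hypothetical parser for the klog-style kubelet lines in this log,
# e.g. 'I0124 00:53:09.444811 2116 server.go:529] "Kubelet version" ...'.
# Level is one of I/W/E/F, followed by month+day, time, PID, and the
# source file:line that emitted the message.
import re

KLOG = re.compile(
    r"(?P<level>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+(?P<pid>\d+)\s+"
    r"(?P<src>[\w.]+:\d+)\]\s+(?P<msg>.*)"
)

line = 'I0124 00:53:09.444811 2116 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"'
m = KLOG.match(line)
print(m.group("level"), m.group("src"), m.group("msg"))
```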
Jan 24 00:53:09.447374 kubelet[2116]: I0124 00:53:09.445927 2116 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:53:09.521365 kubelet[2116]: E0124 00:53:09.521261 2116 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:53:09.521612 kubelet[2116]: I0124 00:53:09.521510 2116 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:53:09.525222 kubelet[2116]: E0124 00:53:09.525131 2116 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:53:09.525222 kubelet[2116]: I0124 00:53:09.525210 2116 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:53:09.531454 kubelet[2116]: I0124 00:53:09.531363 2116 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 24 00:53:09.532306 kubelet[2116]: I0124 00:53:09.532205 2116 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:53:09.532448 kubelet[2116]: I0124 00:53:09.532276 2116 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:53:09.532448 kubelet[2116]: I0124 00:53:09.532443 2116 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:53:09.532585 kubelet[2116]: I0124 00:53:09.532454 2116 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:53:09.532585 kubelet[2116]: I0124 00:53:09.532544 2116 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Jan 24 00:53:09.536280 kubelet[2116]: I0124 00:53:09.536204 2116 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:53:09.537765 kubelet[2116]: I0124 00:53:09.537660 2116 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:53:09.537812 kubelet[2116]: I0124 00:53:09.537793 2116 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:53:09.537812 kubelet[2116]: I0124 00:53:09.537909 2116 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:53:09.538015 kubelet[2116]: I0124 00:53:09.537932 2116 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:53:09.538651 kubelet[2116]: E0124 00:53:09.538566 2116 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:53:09.538699 kubelet[2116]: E0124 00:53:09.538666 2116 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:53:09.542902 kubelet[2116]: I0124 00:53:09.540575 2116 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:53:09.542902 kubelet[2116]: I0124 00:53:09.541082 2116 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:53:09.542902 kubelet[2116]: I0124 00:53:09.541118 2116 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:53:09.542902 kubelet[2116]: W0124 00:53:09.541182 2116 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 24 00:53:09.545459 kubelet[2116]: I0124 00:53:09.545443 2116 server.go:1262] "Started kubelet" Jan 24 00:53:09.545590 kubelet[2116]: I0124 00:53:09.545486 2116 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:53:09.545709 kubelet[2116]: I0124 00:53:09.545679 2116 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:53:09.545804 kubelet[2116]: I0124 00:53:09.545783 2116 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:53:09.546311 kubelet[2116]: I0124 00:53:09.546291 2116 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:53:09.546552 kubelet[2116]: I0124 00:53:09.546348 2116 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:53:09.546711 kubelet[2116]: I0124 00:53:09.546643 2116 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:53:09.547956 kubelet[2116]: I0124 00:53:09.546547 2116 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:53:09.550536 kubelet[2116]: E0124 00:53:09.548979 2116 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d84919bcbd6e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:53:09.545424612 +0000 UTC m=+0.465103214,LastTimestamp:2026-01-24 00:53:09.545424612 +0000 UTC m=+0.465103214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:53:09.551151 kubelet[2116]: I0124 00:53:09.551132 2116 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:53:09.551312 kubelet[2116]: I0124 00:53:09.551296 2116 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:53:09.551442 kubelet[2116]: I0124 00:53:09.551427 2116 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:53:09.551514 kubelet[2116]: E0124 00:53:09.551448 2116 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:53:09.551657 kubelet[2116]: I0124 00:53:09.551595 2116 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:53:09.551775 kubelet[2116]: I0124 00:53:09.551715 2116 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:53:09.552189 kubelet[2116]: E0124 00:53:09.552162 2116 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:53:09.552917 kubelet[2116]: E0124 00:53:09.552571 2116 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:53:09.552917 kubelet[2116]: E0124 00:53:09.552695 2116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Jan 24 00:53:09.553085 kubelet[2116]: I0124 00:53:09.553039 2116 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:53:09.568015 kubelet[2116]: I0124 00:53:09.567975 2116 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:53:09.568015 kubelet[2116]: I0124 00:53:09.568014 2116 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:53:09.568123 kubelet[2116]: I0124 00:53:09.568033 2116 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:53:09.570630 kubelet[2116]: I0124 00:53:09.570578 2116 policy_none.go:49] "None policy: Start" Jan 24 00:53:09.570672 kubelet[2116]: I0124 00:53:09.570625 2116 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:53:09.570672 kubelet[2116]: I0124 00:53:09.570665 2116 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:53:09.573053 kubelet[2116]: I0124 00:53:09.573003 2116 policy_none.go:47] "Start" Jan 24 00:53:09.575284 kubelet[2116]: I0124 00:53:09.575178 2116 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:53:09.579320 kubelet[2116]: I0124 00:53:09.578189 2116 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:53:09.579320 kubelet[2116]: I0124 00:53:09.578234 2116 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:53:09.579320 kubelet[2116]: I0124 00:53:09.578260 2116 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:53:09.579320 kubelet[2116]: E0124 00:53:09.578311 2116 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:53:09.579599 kubelet[2116]: E0124 00:53:09.579350 2116 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:53:09.581719 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:53:09.599105 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:53:09.602523 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:53:09.612112 kubelet[2116]: E0124 00:53:09.611906 2116 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:53:09.612252 kubelet[2116]: I0124 00:53:09.612202 2116 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:53:09.612281 kubelet[2116]: I0124 00:53:09.612245 2116 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:53:09.612960 kubelet[2116]: I0124 00:53:09.612519 2116 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:53:09.613333 kubelet[2116]: E0124 00:53:09.613281 2116 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:53:09.613333 kubelet[2116]: E0124 00:53:09.613329 2116 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:53:09.693294 systemd[1]: Created slice kubepods-burstable-pod5ab690d64e0eefb99e5088f36ebefd50.slice - libcontainer container kubepods-burstable-pod5ab690d64e0eefb99e5088f36ebefd50.slice. Jan 24 00:53:09.713691 kubelet[2116]: I0124 00:53:09.713530 2116 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:53:09.714420 kubelet[2116]: E0124 00:53:09.713938 2116 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 24 00:53:09.715688 kubelet[2116]: E0124 00:53:09.715358 2116 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:53:09.717618 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Jan 24 00:53:09.736262 kubelet[2116]: E0124 00:53:09.736213 2116 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 24 00:53:09.739026 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice.
Jan 24 00:53:09.741078 kubelet[2116]: E0124 00:53:09.741031 2116 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 24 00:53:09.753520 kubelet[2116]: E0124 00:53:09.753458 2116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms"
Jan 24 00:53:09.853308 kubelet[2116]: I0124 00:53:09.853255 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ab690d64e0eefb99e5088f36ebefd50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ab690d64e0eefb99e5088f36ebefd50\") " pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:09.853308 kubelet[2116]: I0124 00:53:09.853318 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:09.853308 kubelet[2116]: I0124 00:53:09.853351 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:09.853513 kubelet[2116]: I0124 00:53:09.853378 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Jan 24 00:53:09.853513 kubelet[2116]: I0124 00:53:09.853400 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ab690d64e0eefb99e5088f36ebefd50-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ab690d64e0eefb99e5088f36ebefd50\") " pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:09.853513 kubelet[2116]: I0124 00:53:09.853422 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ab690d64e0eefb99e5088f36ebefd50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ab690d64e0eefb99e5088f36ebefd50\") " pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:09.853513 kubelet[2116]: I0124 00:53:09.853449 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:09.853513 kubelet[2116]: I0124 00:53:09.853473 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:09.853616 kubelet[2116]: I0124 00:53:09.853497 2116 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:09.916038 kubelet[2116]: I0124 00:53:09.915943 2116 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 24 00:53:09.916330 kubelet[2116]: E0124 00:53:09.916269 2116 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jan 24 00:53:10.019374 kubelet[2116]: E0124 00:53:10.019206 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:10.020735 containerd[1451]: time="2026-01-24T00:53:10.020700693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ab690d64e0eefb99e5088f36ebefd50,Namespace:kube-system,Attempt:0,}"
Jan 24 00:53:10.040103 kubelet[2116]: E0124 00:53:10.039979 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:10.040607 containerd[1451]: time="2026-01-24T00:53:10.040541760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}"
Jan 24 00:53:10.044082 kubelet[2116]: E0124 00:53:10.044020 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:10.044539 containerd[1451]: time="2026-01-24T00:53:10.044480669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}"
Jan 24 00:53:10.154100 kubelet[2116]: E0124 00:53:10.154008 2116 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms"
Jan 24 00:53:10.318171 kubelet[2116]: I0124 00:53:10.318015 2116 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 24 00:53:10.318500 kubelet[2116]: E0124 00:53:10.318460 2116 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jan 24 00:53:10.390927 kubelet[2116]: E0124 00:53:10.390740 2116 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 24 00:53:10.431486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350479198.mount: Deactivated successfully.
Jan 24 00:53:10.440284 containerd[1451]: time="2026-01-24T00:53:10.440166026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:53:10.444334 containerd[1451]: time="2026-01-24T00:53:10.444208651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 24 00:53:10.445595 containerd[1451]: time="2026-01-24T00:53:10.445494975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:53:10.446459 containerd[1451]: time="2026-01-24T00:53:10.446422190Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 24 00:53:10.447705 containerd[1451]: time="2026-01-24T00:53:10.447627595Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:53:10.449463 containerd[1451]: time="2026-01-24T00:53:10.449350652Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:53:10.451033 containerd[1451]: time="2026-01-24T00:53:10.450051782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 24 00:53:10.454647 containerd[1451]: time="2026-01-24T00:53:10.454578454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 24 00:53:10.456182 containerd[1451]: time="2026-01-24T00:53:10.456102394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 415.476707ms"
Jan 24 00:53:10.457554 containerd[1451]: time="2026-01-24T00:53:10.457451374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 436.677183ms"
Jan 24 00:53:10.461224 containerd[1451]: time="2026-01-24T00:53:10.461075536Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 416.516651ms"
Jan 24 00:53:10.539355 kubelet[2116]: E0124 00:53:10.539233 2116 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 24 00:53:10.573792 containerd[1451]: time="2026-01-24T00:53:10.573081424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:53:10.573792 containerd[1451]: time="2026-01-24T00:53:10.573143710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:53:10.573792 containerd[1451]: time="2026-01-24T00:53:10.573161704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:10.573792 containerd[1451]: time="2026-01-24T00:53:10.573243597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:10.574981 containerd[1451]: time="2026-01-24T00:53:10.574503132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:53:10.574981 containerd[1451]: time="2026-01-24T00:53:10.574556111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:53:10.574981 containerd[1451]: time="2026-01-24T00:53:10.574574796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:10.574981 containerd[1451]: time="2026-01-24T00:53:10.574726309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:10.590494 containerd[1451]: time="2026-01-24T00:53:10.590395747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:53:10.590596 containerd[1451]: time="2026-01-24T00:53:10.590544705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:53:10.590672 containerd[1451]: time="2026-01-24T00:53:10.590631697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:10.590942 containerd[1451]: time="2026-01-24T00:53:10.590820990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:10.598166 systemd[1]: Started cri-containerd-ec8183488a381335b13d5549ae315786497d2b31915035d83322875390e50bc3.scope - libcontainer container ec8183488a381335b13d5549ae315786497d2b31915035d83322875390e50bc3.
Jan 24 00:53:10.601960 systemd[1]: Started cri-containerd-20138550e88f2772eb04181e07bbd538637187db3eb0eddac326c1a209e74e20.scope - libcontainer container 20138550e88f2772eb04181e07bbd538637187db3eb0eddac326c1a209e74e20.
Jan 24 00:53:10.630258 systemd[1]: Started cri-containerd-7240cf131d4b355a91862de9fe4b5c252ba917890889f05fc7d2ad0364a2dde9.scope - libcontainer container 7240cf131d4b355a91862de9fe4b5c252ba917890889f05fc7d2ad0364a2dde9.
Jan 24 00:53:10.652699 containerd[1451]: time="2026-01-24T00:53:10.652100345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec8183488a381335b13d5549ae315786497d2b31915035d83322875390e50bc3\""
Jan 24 00:53:10.655198 kubelet[2116]: E0124 00:53:10.655171 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:10.660624 containerd[1451]: time="2026-01-24T00:53:10.660510543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ab690d64e0eefb99e5088f36ebefd50,Namespace:kube-system,Attempt:0,} returns sandbox id \"20138550e88f2772eb04181e07bbd538637187db3eb0eddac326c1a209e74e20\""
Jan 24 00:53:10.661929 kubelet[2116]: E0124 00:53:10.661809 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:10.664078 containerd[1451]: time="2026-01-24T00:53:10.662819262Z" level=info msg="CreateContainer within sandbox \"ec8183488a381335b13d5549ae315786497d2b31915035d83322875390e50bc3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 24 00:53:10.666448 containerd[1451]: time="2026-01-24T00:53:10.666365521Z" level=info msg="CreateContainer within sandbox \"20138550e88f2772eb04181e07bbd538637187db3eb0eddac326c1a209e74e20\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 24 00:53:10.679616 containerd[1451]: time="2026-01-24T00:53:10.679515451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"7240cf131d4b355a91862de9fe4b5c252ba917890889f05fc7d2ad0364a2dde9\""
Jan 24 00:53:10.680794 kubelet[2116]: E0124 00:53:10.680759 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:10.685533 containerd[1451]: time="2026-01-24T00:53:10.685485026Z" level=info msg="CreateContainer within sandbox \"7240cf131d4b355a91862de9fe4b5c252ba917890889f05fc7d2ad0364a2dde9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 24 00:53:10.692153 containerd[1451]: time="2026-01-24T00:53:10.692049459Z" level=info msg="CreateContainer within sandbox \"20138550e88f2772eb04181e07bbd538637187db3eb0eddac326c1a209e74e20\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b74536e90a68c773958d832480a2fb64e6536146ddafa94313dbcfad60a25e5c\""
Jan 24 00:53:10.693052 containerd[1451]: time="2026-01-24T00:53:10.693008792Z" level=info msg="StartContainer for \"b74536e90a68c773958d832480a2fb64e6536146ddafa94313dbcfad60a25e5c\""
Jan 24 00:53:10.693302 containerd[1451]: time="2026-01-24T00:53:10.693174819Z" level=info msg="CreateContainer within sandbox \"ec8183488a381335b13d5549ae315786497d2b31915035d83322875390e50bc3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57b10c41baa3d7b318743d39a91157781efc70bc531eec8d0eff6aab905d40df\""
Jan 24 00:53:10.693501 containerd[1451]: time="2026-01-24T00:53:10.693469509Z" level=info msg="StartContainer for \"57b10c41baa3d7b318743d39a91157781efc70bc531eec8d0eff6aab905d40df\""
Jan 24 00:53:10.706332 containerd[1451]: time="2026-01-24T00:53:10.706115746Z" level=info msg="CreateContainer within sandbox \"7240cf131d4b355a91862de9fe4b5c252ba917890889f05fc7d2ad0364a2dde9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7e3968894040a6c23fba7252bb832a068720f7faea5388acd4baf94f607980c2\""
Jan 24 00:53:10.707723 containerd[1451]: time="2026-01-24T00:53:10.706917778Z" level=info msg="StartContainer for \"7e3968894040a6c23fba7252bb832a068720f7faea5388acd4baf94f607980c2\""
Jan 24 00:53:10.712695 kubelet[2116]: E0124 00:53:10.712621 2116 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 24 00:53:10.733169 systemd[1]: Started cri-containerd-b74536e90a68c773958d832480a2fb64e6536146ddafa94313dbcfad60a25e5c.scope - libcontainer container b74536e90a68c773958d832480a2fb64e6536146ddafa94313dbcfad60a25e5c.
Jan 24 00:53:10.737394 systemd[1]: Started cri-containerd-57b10c41baa3d7b318743d39a91157781efc70bc531eec8d0eff6aab905d40df.scope - libcontainer container 57b10c41baa3d7b318743d39a91157781efc70bc531eec8d0eff6aab905d40df.
Jan 24 00:53:10.748028 systemd[1]: Started cri-containerd-7e3968894040a6c23fba7252bb832a068720f7faea5388acd4baf94f607980c2.scope - libcontainer container 7e3968894040a6c23fba7252bb832a068720f7faea5388acd4baf94f607980c2.
Jan 24 00:53:11.008068 containerd[1451]: time="2026-01-24T00:53:11.007944139Z" level=info msg="StartContainer for \"57b10c41baa3d7b318743d39a91157781efc70bc531eec8d0eff6aab905d40df\" returns successfully"
Jan 24 00:53:11.008168 containerd[1451]: time="2026-01-24T00:53:11.008110539Z" level=info msg="StartContainer for \"7e3968894040a6c23fba7252bb832a068720f7faea5388acd4baf94f607980c2\" returns successfully"
Jan 24 00:53:11.008168 containerd[1451]: time="2026-01-24T00:53:11.008141557Z" level=info msg="StartContainer for \"b74536e90a68c773958d832480a2fb64e6536146ddafa94313dbcfad60a25e5c\" returns successfully"
Jan 24 00:53:11.123437 kubelet[2116]: I0124 00:53:11.122538 2116 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 24 00:53:11.590364 kubelet[2116]: E0124 00:53:11.590190 2116 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 24 00:53:11.590364 kubelet[2116]: E0124 00:53:11.590297 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:11.594259 kubelet[2116]: E0124 00:53:11.593988 2116 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 24 00:53:11.594259 kubelet[2116]: E0124 00:53:11.594078 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:11.597993 kubelet[2116]: E0124 00:53:11.597823 2116 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jan 24 00:53:11.605167 kubelet[2116]: E0124 00:53:11.605091 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:12.414571 kubelet[2116]: E0124 00:53:12.414499 2116 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 24 00:53:12.499060 kubelet[2116]: I0124 00:53:12.498983 2116 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 24 00:53:12.499060 kubelet[2116]: E0124 00:53:12.499051 2116 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jan 24 00:53:12.540551 kubelet[2116]: I0124 00:53:12.540054 2116 apiserver.go:52] "Watching apiserver"
Jan 24 00:53:12.551537 kubelet[2116]: I0124 00:53:12.551454 2116 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 24 00:53:12.554159 kubelet[2116]: I0124 00:53:12.553806 2116 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:12.560989 kubelet[2116]: E0124 00:53:12.560840 2116 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:12.560989 kubelet[2116]: I0124 00:53:12.560960 2116 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:12.562405 kubelet[2116]: E0124 00:53:12.562334 2116 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:12.562405 kubelet[2116]: I0124 00:53:12.562383 2116 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 24 00:53:12.563638 kubelet[2116]: E0124 00:53:12.563567 2116 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jan 24 00:53:12.597920 kubelet[2116]: I0124 00:53:12.597825 2116 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 24 00:53:12.598261 kubelet[2116]: I0124 00:53:12.597975 2116 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:12.599389 kubelet[2116]: E0124 00:53:12.599348 2116 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jan 24 00:53:12.599570 kubelet[2116]: E0124 00:53:12.599534 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:12.599708 kubelet[2116]: E0124 00:53:12.599668 2116 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:12.599978 kubelet[2116]: E0124 00:53:12.599905 2116 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:14.518815 systemd[1]: Reloading requested from client PID 2402 ('systemctl') (unit session-7.scope)...
Jan 24 00:53:14.518954 systemd[1]: Reloading...
Jan 24 00:53:14.615938 zram_generator::config[2447]: No configuration found.
Jan 24 00:53:14.725501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:53:14.813508 systemd[1]: Reloading finished in 294 ms.
Jan 24 00:53:14.865821 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:53:14.873261 systemd[1]: kubelet.service: Deactivated successfully.
Jan 24 00:53:14.873535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:53:14.873615 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 129.1M memory peak, 0B memory swap peak.
Jan 24 00:53:14.892247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:53:15.058442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:53:15.065242 (kubelet)[2486]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 24 00:53:15.124621 kubelet[2486]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 24 00:53:15.124621 kubelet[2486]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 24 00:53:15.125168 kubelet[2486]: I0124 00:53:15.124734 2486 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 24 00:53:15.135111 kubelet[2486]: I0124 00:53:15.135045 2486 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 24 00:53:15.135111 kubelet[2486]: I0124 00:53:15.135084 2486 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 24 00:53:15.135111 kubelet[2486]: I0124 00:53:15.135109 2486 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 24 00:53:15.135111 kubelet[2486]: I0124 00:53:15.135116 2486 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 24 00:53:15.135334 kubelet[2486]: I0124 00:53:15.135293 2486 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 24 00:53:15.136931 kubelet[2486]: I0124 00:53:15.136814 2486 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 24 00:53:15.139031 kubelet[2486]: I0124 00:53:15.138925 2486 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 24 00:53:15.141711 kubelet[2486]: E0124 00:53:15.141674 2486 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 24 00:53:15.141758 kubelet[2486]: I0124 00:53:15.141726 2486 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Jan 24 00:53:15.149408 kubelet[2486]: I0124 00:53:15.149333 2486 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 24 00:53:15.149769 kubelet[2486]: I0124 00:53:15.149688 2486 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 24 00:53:15.149938 kubelet[2486]: I0124 00:53:15.149727 2486 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 24 00:53:15.150047 kubelet[2486]: I0124 00:53:15.149843 2486 topology_manager.go:138] "Creating topology manager with none policy"
Jan 24 00:53:15.150047 kubelet[2486]: I0124 00:53:15.149959 2486 container_manager_linux.go:306] "Creating device plugin manager"
Jan 24 00:53:15.150047 kubelet[2486]: I0124 00:53:15.149981 2486 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 24 00:53:15.151140 kubelet[2486]: I0124 00:53:15.151061 2486 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:53:15.151348 kubelet[2486]: I0124 00:53:15.151275 2486 kubelet.go:475] "Attempting to sync node with API server"
Jan 24 00:53:15.151348 kubelet[2486]: I0124 00:53:15.151309 2486 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 24 00:53:15.151348 kubelet[2486]: I0124 00:53:15.151328 2486 kubelet.go:387] "Adding apiserver pod source"
Jan 24 00:53:15.151348 kubelet[2486]: I0124 00:53:15.151346 2486 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 24 00:53:15.154636 kubelet[2486]: I0124 00:53:15.154520 2486 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 24 00:53:15.155226 kubelet[2486]: I0124 00:53:15.155211 2486 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 24 00:53:15.155312 kubelet[2486]: I0124 00:53:15.155301 2486 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 24 00:53:15.161491 kubelet[2486]: I0124 00:53:15.161374 2486 server.go:1262] "Started kubelet"
Jan 24 00:53:15.161840 kubelet[2486]: I0124 00:53:15.161554 2486 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 24 00:53:15.161840 kubelet[2486]: I0124 00:53:15.161809 2486 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 24 00:53:15.163107 kubelet[2486]: I0124 00:53:15.162963 2486 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 24 00:53:15.163107 kubelet[2486]: I0124 00:53:15.163033 2486 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 24 00:53:15.166954 kubelet[2486]: I0124 00:53:15.165444 2486 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 24 00:53:15.166954 kubelet[2486]: I0124 00:53:15.165956 2486 server.go:310] "Adding debug handlers to kubelet server"
Jan 24 00:53:15.170364 kubelet[2486]: I0124 00:53:15.169152 2486 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 24 00:53:15.171323 kubelet[2486]: I0124 00:53:15.171230 2486 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 24 00:53:15.171323 kubelet[2486]: I0124 00:53:15.171321 2486 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 24 00:53:15.171515 kubelet[2486]: I0124 00:53:15.171440 2486 reconciler.go:29] "Reconciler: start to sync state"
Jan 24 00:53:15.174736 kubelet[2486]: E0124 00:53:15.174605 2486 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 24 00:53:15.181704 kubelet[2486]: I0124 00:53:15.181670 2486 factory.go:223] Registration of the containerd container factory successfully
Jan 24 00:53:15.181704 kubelet[2486]: I0124 00:53:15.181702 2486 factory.go:223] Registration of the systemd container factory successfully
Jan 24 00:53:15.182220 kubelet[2486]: I0124 00:53:15.181832 2486 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 24 00:53:15.196709 kubelet[2486]: I0124 00:53:15.196577 2486 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 24 00:53:15.198451 kubelet[2486]: I0124 00:53:15.198366 2486 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 24 00:53:15.198451 kubelet[2486]: I0124 00:53:15.198434 2486 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 24 00:53:15.198451 kubelet[2486]: I0124 00:53:15.198457 2486 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 24 00:53:15.198593 kubelet[2486]: E0124 00:53:15.198499 2486 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 24 00:53:15.233234 kubelet[2486]: I0124 00:53:15.233183 2486 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 24 00:53:15.233234 kubelet[2486]: I0124 00:53:15.233201 2486 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 24 00:53:15.233234 kubelet[2486]: I0124 00:53:15.233219 2486 state_mem.go:36] "Initialized new in-memory state store"
Jan 24 00:53:15.233502 kubelet[2486]: I0124 00:53:15.233460 2486 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 24 00:53:15.233502 kubelet[2486]: I0124 00:53:15.233477 2486 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 24 00:53:15.233502 kubelet[2486]: I0124 00:53:15.233492 2486 policy_none.go:49] "None policy: Start"
Jan 24 00:53:15.233502 kubelet[2486]: I0124 00:53:15.233501 2486 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 24 00:53:15.233669 kubelet[2486]: I0124 00:53:15.233512 2486 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 24 00:53:15.233669 kubelet[2486]: I0124 00:53:15.233590 2486 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 24 00:53:15.233669 kubelet[2486]: I0124 00:53:15.233598 2486 policy_none.go:47] "Start"
Jan 24 00:53:15.239093 kubelet[2486]: E0124 00:53:15.239018 2486 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 24 00:53:15.239275 kubelet[2486]: I0124 00:53:15.239253 2486 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 24 00:53:15.239363 kubelet[2486]: I0124 00:53:15.239268 2486 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 24 00:53:15.239682 kubelet[2486]: I0124 00:53:15.239463 2486 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 24 00:53:15.241299 kubelet[2486]: E0124 00:53:15.241247 2486 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 24 00:53:15.300300 kubelet[2486]: I0124 00:53:15.300197 2486 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:15.300300 kubelet[2486]: I0124 00:53:15.300221 2486 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:15.300300 kubelet[2486]: I0124 00:53:15.300263 2486 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 24 00:53:15.347452 kubelet[2486]: I0124 00:53:15.347290 2486 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 24 00:53:15.357265 kubelet[2486]: I0124 00:53:15.357192 2486 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 24 00:53:15.357373 kubelet[2486]: I0124 00:53:15.357310 2486 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 24 00:53:15.473459 kubelet[2486]: I0124 00:53:15.473354 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ab690d64e0eefb99e5088f36ebefd50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ab690d64e0eefb99e5088f36ebefd50\") " pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:15.473459 kubelet[2486]: I0124 00:53:15.473402 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ab690d64e0eefb99e5088f36ebefd50-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ab690d64e0eefb99e5088f36ebefd50\") " pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:15.473459 kubelet[2486]: I0124 00:53:15.473422 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ab690d64e0eefb99e5088f36ebefd50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ab690d64e0eefb99e5088f36ebefd50\") " pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:15.473459 kubelet[2486]: I0124 00:53:15.473439 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:15.473459 kubelet[2486]: I0124 00:53:15.473452 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:15.473713 kubelet[2486]: I0124 00:53:15.473465 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:15.473713 kubelet[2486]: I0124 00:53:15.473478 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:15.473713 kubelet[2486]: I0124 00:53:15.473489 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:15.473713 kubelet[2486]: I0124 00:53:15.473502 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Jan 24 00:53:15.520137 sudo[2527]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 24 00:53:15.520520 sudo[2527]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 24 00:53:15.610380 kubelet[2486]: E0124 00:53:15.610192 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:15.610380 kubelet[2486]: E0124 00:53:15.610224 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:15.610380 kubelet[2486]: E0124 00:53:15.610259 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:16.047451 sudo[2527]: pam_unix(sudo:session): session closed for user root
Jan 24 00:53:16.153194 kubelet[2486]: I0124 00:53:16.153154 2486 apiserver.go:52] "Watching apiserver"
Jan 24 00:53:16.171591 kubelet[2486]: I0124 00:53:16.171485 2486 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 24 00:53:16.223741 kubelet[2486]: I0124 00:53:16.223432 2486 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 24 00:53:16.223741 kubelet[2486]: I0124 00:53:16.223507 2486 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:16.224973 kubelet[2486]: I0124 00:53:16.224717 2486 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:16.249193 kubelet[2486]: E0124 00:53:16.249153 2486 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 24 00:53:16.249815 kubelet[2486]: E0124 00:53:16.249200 2486 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 24 00:53:16.251225 kubelet[2486]: E0124 00:53:16.251204 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:16.251365 kubelet[2486]: E0124 00:53:16.251273 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:16.251480 kubelet[2486]: E0124 00:53:16.249304 2486 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 24 00:53:16.251668 kubelet[2486]: E0124 00:53:16.251651 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:16.270179 kubelet[2486]: I0124 00:53:16.270014 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.269983024 podStartE2EDuration="1.269983024s" podCreationTimestamp="2026-01-24 00:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:16.269744629 +0000 UTC m=+1.199426490" watchObservedRunningTime="2026-01-24 00:53:16.269983024 +0000 UTC m=+1.199664885"
Jan 24 00:53:16.287534 kubelet[2486]: I0124 00:53:16.287203 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.287191 podStartE2EDuration="1.287191s" podCreationTimestamp="2026-01-24 00:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:16.279614689 +0000 UTC m=+1.209296550" watchObservedRunningTime="2026-01-24 00:53:16.287191 +0000 UTC m=+1.216872862"
Jan 24 00:53:16.287534 kubelet[2486]: I0124 00:53:16.287404 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.287400431 podStartE2EDuration="1.287400431s" podCreationTimestamp="2026-01-24 00:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:16.2870784 +0000 UTC m=+1.216760261" watchObservedRunningTime="2026-01-24 00:53:16.287400431 +0000 UTC m=+1.217082302"
Jan 24 00:53:17.219371 kubelet[2486]: E0124 00:53:17.219307 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:17.220366 kubelet[2486]: E0124 00:53:17.220228 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:17.220507 kubelet[2486]: E0124 00:53:17.220476 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:17.475941 sudo[1631]: pam_unix(sudo:session): session closed for user root
Jan 24 00:53:17.478270 sshd[1627]: pam_unix(sshd:session): session closed for user core
Jan 24 00:53:17.482312 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:56906.service: Deactivated successfully.
Jan 24 00:53:17.484282 systemd[1]: session-7.scope: Deactivated successfully.
Jan 24 00:53:17.484582 systemd[1]: session-7.scope: Consumed 4.762s CPU time, 161.5M memory peak, 0B memory swap peak.
Jan 24 00:53:17.485329 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit.
Jan 24 00:53:17.486777 systemd-logind[1439]: Removed session 7.
Jan 24 00:53:18.221150 kubelet[2486]: E0124 00:53:18.221001 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:19.899745 kubelet[2486]: I0124 00:53:19.899678 2486 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 24 00:53:19.900326 containerd[1451]: time="2026-01-24T00:53:19.900195842Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 24 00:53:19.900580 kubelet[2486]: I0124 00:53:19.900421 2486 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 24 00:53:21.021960 systemd[1]: Created slice kubepods-besteffort-pod540e1ed2_4b03_4780_9270_a9544b7b1d26.slice - libcontainer container kubepods-besteffort-pod540e1ed2_4b03_4780_9270_a9544b7b1d26.slice.
Jan 24 00:53:21.038030 systemd[1]: Created slice kubepods-burstable-pode2940c67_12c2_402c_93f1_4377a6b6351b.slice - libcontainer container kubepods-burstable-pode2940c67_12c2_402c_93f1_4377a6b6351b.slice.
Jan 24 00:53:21.108690 kubelet[2486]: I0124 00:53:21.108596 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-hostproc\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.109591 kubelet[2486]: I0124 00:53:21.108805 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-xtables-lock\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.109591 kubelet[2486]: I0124 00:53:21.108841 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-host-proc-sys-kernel\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.109591 kubelet[2486]: I0124 00:53:21.109206 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87ml5\" (UniqueName: \"kubernetes.io/projected/e2940c67-12c2-402c-93f1-4377a6b6351b-kube-api-access-87ml5\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.109591 kubelet[2486]: I0124 00:53:21.109234 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/540e1ed2-4b03-4780-9270-a9544b7b1d26-lib-modules\") pod \"kube-proxy-5jf5g\" (UID: \"540e1ed2-4b03-4780-9270-a9544b7b1d26\") " pod="kube-system/kube-proxy-5jf5g"
Jan 24 00:53:21.109591 kubelet[2486]: I0124 00:53:21.109257 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-cgroup\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.111522 kubelet[2486]: I0124 00:53:21.109279 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2940c67-12c2-402c-93f1-4377a6b6351b-clustermesh-secrets\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.113466 kubelet[2486]: I0124 00:53:21.113141 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-host-proc-sys-net\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.113466 kubelet[2486]: I0124 00:53:21.113185 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/540e1ed2-4b03-4780-9270-a9544b7b1d26-kube-proxy\") pod \"kube-proxy-5jf5g\" (UID: \"540e1ed2-4b03-4780-9270-a9544b7b1d26\") " pod="kube-system/kube-proxy-5jf5g"
Jan 24 00:53:21.113466 kubelet[2486]: I0124 00:53:21.113211 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cni-path\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.113466 kubelet[2486]: I0124 00:53:21.113236 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-etc-cni-netd\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.113466 kubelet[2486]: I0124 00:53:21.113257 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-config-path\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.113466 kubelet[2486]: I0124 00:53:21.113276 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2940c67-12c2-402c-93f1-4377a6b6351b-hubble-tls\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.113698 kubelet[2486]: I0124 00:53:21.113297 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwhnq\" (UniqueName: \"kubernetes.io/projected/540e1ed2-4b03-4780-9270-a9544b7b1d26-kube-api-access-lwhnq\") pod \"kube-proxy-5jf5g\" (UID: \"540e1ed2-4b03-4780-9270-a9544b7b1d26\") " pod="kube-system/kube-proxy-5jf5g"
Jan 24 00:53:21.113698 kubelet[2486]: I0124 00:53:21.113310 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-run\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.113698 kubelet[2486]: I0124 00:53:21.113325 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-bpf-maps\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.113698 kubelet[2486]: I0124 00:53:21.113364 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-lib-modules\") pod \"cilium-xv2hr\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " pod="kube-system/cilium-xv2hr"
Jan 24 00:53:21.113698 kubelet[2486]: I0124 00:53:21.113408 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/540e1ed2-4b03-4780-9270-a9544b7b1d26-xtables-lock\") pod \"kube-proxy-5jf5g\" (UID: \"540e1ed2-4b03-4780-9270-a9544b7b1d26\") " pod="kube-system/kube-proxy-5jf5g"
Jan 24 00:53:21.121646 systemd[1]: Created slice kubepods-besteffort-podd740a83b_996a_4d6c_8de9_107b47be9b41.slice - libcontainer container kubepods-besteffort-podd740a83b_996a_4d6c_8de9_107b47be9b41.slice.
Jan 24 00:53:21.216370 kubelet[2486]: I0124 00:53:21.214817 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d740a83b-996a-4d6c-8de9-107b47be9b41-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-w9b2g\" (UID: \"d740a83b-996a-4d6c-8de9-107b47be9b41\") " pod="kube-system/cilium-operator-6f9c7c5859-w9b2g"
Jan 24 00:53:21.216370 kubelet[2486]: I0124 00:53:21.215029 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br6zw\" (UniqueName: \"kubernetes.io/projected/d740a83b-996a-4d6c-8de9-107b47be9b41-kube-api-access-br6zw\") pod \"cilium-operator-6f9c7c5859-w9b2g\" (UID: \"d740a83b-996a-4d6c-8de9-107b47be9b41\") " pod="kube-system/cilium-operator-6f9c7c5859-w9b2g"
Jan 24 00:53:21.338073 kubelet[2486]: E0124 00:53:21.337910 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:21.339013 containerd[1451]: time="2026-01-24T00:53:21.338809307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jf5g,Uid:540e1ed2-4b03-4780-9270-a9544b7b1d26,Namespace:kube-system,Attempt:0,}"
Jan 24 00:53:21.347022 kubelet[2486]: E0124 00:53:21.346975 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:21.347638 containerd[1451]: time="2026-01-24T00:53:21.347570462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xv2hr,Uid:e2940c67-12c2-402c-93f1-4377a6b6351b,Namespace:kube-system,Attempt:0,}"
Jan 24 00:53:21.374013 containerd[1451]: time="2026-01-24T00:53:21.373693346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:53:21.374013 containerd[1451]: time="2026-01-24T00:53:21.373735405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:53:21.374013 containerd[1451]: time="2026-01-24T00:53:21.373745293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:21.374013 containerd[1451]: time="2026-01-24T00:53:21.373810274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:21.393483 containerd[1451]: time="2026-01-24T00:53:21.393146254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:53:21.393483 containerd[1451]: time="2026-01-24T00:53:21.393205324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:53:21.393483 containerd[1451]: time="2026-01-24T00:53:21.393218359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:21.393483 containerd[1451]: time="2026-01-24T00:53:21.393292687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:21.405127 systemd[1]: Started cri-containerd-7466f02a3a115fa6c1999f9692e22c78f480dd4505de8f125b2614e64b19919c.scope - libcontainer container 7466f02a3a115fa6c1999f9692e22c78f480dd4505de8f125b2614e64b19919c.
Jan 24 00:53:21.427189 systemd[1]: Started cri-containerd-791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a.scope - libcontainer container 791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a.
Jan 24 00:53:21.431354 kubelet[2486]: E0124 00:53:21.431191 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:21.433403 containerd[1451]: time="2026-01-24T00:53:21.433196338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-w9b2g,Uid:d740a83b-996a-4d6c-8de9-107b47be9b41,Namespace:kube-system,Attempt:0,}"
Jan 24 00:53:21.462507 containerd[1451]: time="2026-01-24T00:53:21.462476201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jf5g,Uid:540e1ed2-4b03-4780-9270-a9544b7b1d26,Namespace:kube-system,Attempt:0,} returns sandbox id \"7466f02a3a115fa6c1999f9692e22c78f480dd4505de8f125b2614e64b19919c\""
Jan 24 00:53:21.463789 kubelet[2486]: E0124 00:53:21.463617 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:21.471462 containerd[1451]: time="2026-01-24T00:53:21.471216465Z" level=info msg="CreateContainer within sandbox \"7466f02a3a115fa6c1999f9692e22c78f480dd4505de8f125b2614e64b19919c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 24 00:53:21.473392 containerd[1451]: time="2026-01-24T00:53:21.473032564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xv2hr,Uid:e2940c67-12c2-402c-93f1-4377a6b6351b,Namespace:kube-system,Attempt:0,} returns sandbox id \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\""
Jan 24 00:53:21.475537 kubelet[2486]: E0124 00:53:21.475350 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 00:53:21.479072 containerd[1451]: time="2026-01-24T00:53:21.478994583Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 24 00:53:21.495031 containerd[1451]: time="2026-01-24T00:53:21.485452196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:53:21.495031 containerd[1451]: time="2026-01-24T00:53:21.485533829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:53:21.495031 containerd[1451]: time="2026-01-24T00:53:21.485650947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:21.495031 containerd[1451]: time="2026-01-24T00:53:21.485761193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:53:21.499742 containerd[1451]: time="2026-01-24T00:53:21.499713159Z" level=info msg="CreateContainer within sandbox \"7466f02a3a115fa6c1999f9692e22c78f480dd4505de8f125b2614e64b19919c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03afce76937870468ebd95ceff79bd73bb3b054c2f25dcf9d8c723346a35515f\""
Jan 24 00:53:21.501814 containerd[1451]: time="2026-01-24T00:53:21.500549966Z" level=info msg="StartContainer for \"03afce76937870468ebd95ceff79bd73bb3b054c2f25dcf9d8c723346a35515f\""
Jan 24 00:53:21.521114 systemd[1]: Started cri-containerd-29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc.scope - libcontainer container 29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc.
Jan 24 00:53:21.547151 systemd[1]: Started cri-containerd-03afce76937870468ebd95ceff79bd73bb3b054c2f25dcf9d8c723346a35515f.scope - libcontainer container 03afce76937870468ebd95ceff79bd73bb3b054c2f25dcf9d8c723346a35515f.
Jan 24 00:53:21.588820 containerd[1451]: time="2026-01-24T00:53:21.588642754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-w9b2g,Uid:d740a83b-996a-4d6c-8de9-107b47be9b41,Namespace:kube-system,Attempt:0,} returns sandbox id \"29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc\"" Jan 24 00:53:21.591469 kubelet[2486]: E0124 00:53:21.591402 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:21.594139 containerd[1451]: time="2026-01-24T00:53:21.594049146Z" level=info msg="StartContainer for \"03afce76937870468ebd95ceff79bd73bb3b054c2f25dcf9d8c723346a35515f\" returns successfully" Jan 24 00:53:22.234670 kubelet[2486]: E0124 00:53:22.234611 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:22.297288 kubelet[2486]: E0124 00:53:22.297214 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:22.313579 kubelet[2486]: I0124 00:53:22.313524 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jf5g" podStartSLOduration=2.313505113 podStartE2EDuration="2.313505113s" podCreationTimestamp="2026-01-24 00:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:22.246813111 +0000 UTC m=+7.176494972" watchObservedRunningTime="2026-01-24 00:53:22.313505113 +0000 UTC m=+7.243186973" Jan 24 00:53:22.997612 kubelet[2486]: E0124 00:53:22.997549 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:23.237084 kubelet[2486]: E0124 00:53:23.236973 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:23.237515 kubelet[2486]: E0124 00:53:23.237366 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:24.238759 kubelet[2486]: E0124 00:53:24.238716 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:24.239236 kubelet[2486]: E0124 00:53:24.238976 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:25.927432 kubelet[2486]: E0124 00:53:25.927317 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:32.412993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2557639300.mount: Deactivated successfully. 
Jan 24 00:53:34.232196 containerd[1451]: time="2026-01-24T00:53:34.232042932Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:34.233661 containerd[1451]: time="2026-01-24T00:53:34.233614436Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 24 00:53:34.234698 containerd[1451]: time="2026-01-24T00:53:34.234645339Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:34.236263 containerd[1451]: time="2026-01-24T00:53:34.236219197Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.757160705s" Jan 24 00:53:34.236328 containerd[1451]: time="2026-01-24T00:53:34.236263729Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 24 00:53:34.239550 containerd[1451]: time="2026-01-24T00:53:34.239444069Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 24 00:53:34.243905 containerd[1451]: time="2026-01-24T00:53:34.243705472Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:53:34.282129 containerd[1451]: time="2026-01-24T00:53:34.282057052Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06\"" Jan 24 00:53:34.282990 containerd[1451]: time="2026-01-24T00:53:34.282788529Z" level=info msg="StartContainer for \"91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06\"" Jan 24 00:53:34.362202 systemd[1]: Started cri-containerd-91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06.scope - libcontainer container 91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06. Jan 24 00:53:34.397963 containerd[1451]: time="2026-01-24T00:53:34.397811515Z" level=info msg="StartContainer for \"91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06\" returns successfully" Jan 24 00:53:34.413304 systemd[1]: cri-containerd-91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06.scope: Deactivated successfully. 
Jan 24 00:53:34.546694 containerd[1451]: time="2026-01-24T00:53:34.546258626Z" level=info msg="shim disconnected" id=91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06 namespace=k8s.io Jan 24 00:53:34.546694 containerd[1451]: time="2026-01-24T00:53:34.546328886Z" level=warning msg="cleaning up after shim disconnected" id=91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06 namespace=k8s.io Jan 24 00:53:34.546694 containerd[1451]: time="2026-01-24T00:53:34.546343864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:53:35.001193 update_engine[1441]: I20260124 00:53:35.001048 1441 update_attempter.cc:509] Updating boot flags... Jan 24 00:53:35.041017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2970) Jan 24 00:53:35.090981 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2972) Jan 24 00:53:35.133073 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2972) Jan 24 00:53:35.258104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06-rootfs.mount: Deactivated successfully. Jan 24 00:53:35.265074 kubelet[2486]: E0124 00:53:35.264965 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:35.270628 containerd[1451]: time="2026-01-24T00:53:35.270539141Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:53:35.291540 containerd[1451]: time="2026-01-24T00:53:35.291447656Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3\"" Jan 24 00:53:35.293714 containerd[1451]: time="2026-01-24T00:53:35.293643838Z" level=info msg="StartContainer for \"34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3\"" Jan 24 00:53:35.331063 systemd[1]: Started cri-containerd-34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3.scope - libcontainer container 34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3. Jan 24 00:53:35.360800 containerd[1451]: time="2026-01-24T00:53:35.360746427Z" level=info msg="StartContainer for \"34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3\" returns successfully" Jan 24 00:53:35.376307 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:53:35.376929 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:53:35.377083 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:53:35.384298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:53:35.384601 systemd[1]: cri-containerd-34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3.scope: Deactivated successfully. Jan 24 00:53:35.406072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 24 00:53:35.428016 containerd[1451]: time="2026-01-24T00:53:35.427960545Z" level=info msg="shim disconnected" id=34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3 namespace=k8s.io Jan 24 00:53:35.428016 containerd[1451]: time="2026-01-24T00:53:35.428014255Z" level=warning msg="cleaning up after shim disconnected" id=34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3 namespace=k8s.io Jan 24 00:53:35.428247 containerd[1451]: time="2026-01-24T00:53:35.428033470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:53:36.258250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3-rootfs.mount: Deactivated successfully. Jan 24 00:53:36.269343 kubelet[2486]: E0124 00:53:36.269217 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:36.274570 containerd[1451]: time="2026-01-24T00:53:36.274489913Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:53:36.293655 containerd[1451]: time="2026-01-24T00:53:36.293483765Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9\"" Jan 24 00:53:36.294134 containerd[1451]: time="2026-01-24T00:53:36.294105965Z" level=info msg="StartContainer for \"1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9\"" Jan 24 00:53:36.331037 systemd[1]: Started cri-containerd-1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9.scope - libcontainer container 1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9. Jan 24 00:53:36.362914 systemd[1]: cri-containerd-1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9.scope: Deactivated successfully. Jan 24 00:53:36.364361 containerd[1451]: time="2026-01-24T00:53:36.364260455Z" level=info msg="StartContainer for \"1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9\" returns successfully" Jan 24 00:53:36.397091 containerd[1451]: time="2026-01-24T00:53:36.396945825Z" level=info msg="shim disconnected" id=1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9 namespace=k8s.io Jan 24 00:53:36.397091 containerd[1451]: time="2026-01-24T00:53:36.397033378Z" level=warning msg="cleaning up after shim disconnected" id=1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9 namespace=k8s.io Jan 24 00:53:36.397091 containerd[1451]: time="2026-01-24T00:53:36.397049628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:53:37.257549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9-rootfs.mount: Deactivated successfully. 
Jan 24 00:53:37.274224 kubelet[2486]: E0124 00:53:37.274142 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:37.286701 containerd[1451]: time="2026-01-24T00:53:37.286617325Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 00:53:37.310227 containerd[1451]: time="2026-01-24T00:53:37.310150599Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb\"" Jan 24 00:53:37.311075 containerd[1451]: time="2026-01-24T00:53:37.311011028Z" level=info msg="StartContainer for \"2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb\"" Jan 24 00:53:37.355115 systemd[1]: Started cri-containerd-2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb.scope - libcontainer container 2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb. Jan 24 00:53:37.378350 systemd[1]: cri-containerd-2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb.scope: Deactivated successfully. Jan 24 00:53:37.381173 containerd[1451]: time="2026-01-24T00:53:37.381122852Z" level=info msg="StartContainer for \"2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb\" returns successfully" Jan 24 00:53:37.413166 containerd[1451]: time="2026-01-24T00:53:37.413118643Z" level=info msg="shim disconnected" id=2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb namespace=k8s.io Jan 24 00:53:37.413492 containerd[1451]: time="2026-01-24T00:53:37.413418281Z" level=warning msg="cleaning up after shim disconnected" id=2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb namespace=k8s.io Jan 24 00:53:37.413492 containerd[1451]: time="2026-01-24T00:53:37.413475897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:53:38.257677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb-rootfs.mount: Deactivated successfully. 
Jan 24 00:53:38.279731 kubelet[2486]: E0124 00:53:38.279677 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:38.291760 containerd[1451]: time="2026-01-24T00:53:38.291605626Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 00:53:38.312083 containerd[1451]: time="2026-01-24T00:53:38.311997875Z" level=info msg="CreateContainer within sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8\"" Jan 24 00:53:38.313056 containerd[1451]: time="2026-01-24T00:53:38.312991579Z" level=info msg="StartContainer for \"264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8\"" Jan 24 00:53:38.356160 systemd[1]: Started cri-containerd-264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8.scope - libcontainer container 264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8. Jan 24 00:53:38.390051 containerd[1451]: time="2026-01-24T00:53:38.389980282Z" level=info msg="StartContainer for \"264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8\" returns successfully" Jan 24 00:53:38.544303 kubelet[2486]: I0124 00:53:38.544122 2486 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 00:53:38.582762 systemd[1]: Created slice kubepods-burstable-pod48f3ad0f_10a2_420b_b6dd_d2c1a25342c8.slice - libcontainer container kubepods-burstable-pod48f3ad0f_10a2_420b_b6dd_d2c1a25342c8.slice. Jan 24 00:53:38.592362 systemd[1]: Created slice kubepods-burstable-pod5de1e352_91d7_4ab0_9136_08250e268679.slice - libcontainer container kubepods-burstable-pod5de1e352_91d7_4ab0_9136_08250e268679.slice. 
Jan 24 00:53:38.645968 kubelet[2486]: I0124 00:53:38.645936 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lqj\" (UniqueName: \"kubernetes.io/projected/48f3ad0f-10a2-420b-b6dd-d2c1a25342c8-kube-api-access-x8lqj\") pod \"coredns-66bc5c9577-9lzv7\" (UID: \"48f3ad0f-10a2-420b-b6dd-d2c1a25342c8\") " pod="kube-system/coredns-66bc5c9577-9lzv7" Jan 24 00:53:38.646252 kubelet[2486]: I0124 00:53:38.646121 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5de1e352-91d7-4ab0-9136-08250e268679-config-volume\") pod \"coredns-66bc5c9577-bl6sl\" (UID: \"5de1e352-91d7-4ab0-9136-08250e268679\") " pod="kube-system/coredns-66bc5c9577-bl6sl" Jan 24 00:53:38.646252 kubelet[2486]: I0124 00:53:38.646202 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96vw\" (UniqueName: \"kubernetes.io/projected/5de1e352-91d7-4ab0-9136-08250e268679-kube-api-access-q96vw\") pod \"coredns-66bc5c9577-bl6sl\" (UID: \"5de1e352-91d7-4ab0-9136-08250e268679\") " pod="kube-system/coredns-66bc5c9577-bl6sl" Jan 24 00:53:38.646252 kubelet[2486]: I0124 00:53:38.646219 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48f3ad0f-10a2-420b-b6dd-d2c1a25342c8-config-volume\") pod \"coredns-66bc5c9577-9lzv7\" (UID: \"48f3ad0f-10a2-420b-b6dd-d2c1a25342c8\") " pod="kube-system/coredns-66bc5c9577-9lzv7" Jan 24 00:53:38.891418 kubelet[2486]: E0124 00:53:38.891270 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:38.893422 containerd[1451]: time="2026-01-24T00:53:38.893308057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9lzv7,Uid:48f3ad0f-10a2-420b-b6dd-d2c1a25342c8,Namespace:kube-system,Attempt:0,}" Jan 24 00:53:38.899506 kubelet[2486]: E0124 00:53:38.899189 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:38.900389 containerd[1451]: time="2026-01-24T00:53:38.900178658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bl6sl,Uid:5de1e352-91d7-4ab0-9136-08250e268679,Namespace:kube-system,Attempt:0,}" Jan 24 00:53:39.286034 kubelet[2486]: E0124 00:53:39.285975 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:39.302924 kubelet[2486]: I0124 00:53:39.302743 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xv2hr" podStartSLOduration=6.54180571 podStartE2EDuration="19.302723955s" podCreationTimestamp="2026-01-24 00:53:20 +0000 UTC" firstStartedPulling="2026-01-24 00:53:21.477656803 +0000 UTC m=+6.407338663" lastFinishedPulling="2026-01-24 00:53:34.238575026 +0000 UTC m=+19.168256908" observedRunningTime="2026-01-24 00:53:39.302452125 +0000 UTC m=+24.232133996" watchObservedRunningTime="2026-01-24 00:53:39.302723955 +0000 UTC m=+24.232405816" Jan 24 00:53:40.288668 kubelet[2486]: E0124 00:53:40.288552 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:41.171142 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:50202.service - OpenSSH per-connection server daemon (10.0.0.1:50202). Jan 24 00:53:41.226616 sshd[3309]: Accepted publickey for core from 10.0.0.1 port 50202 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:41.228388 sshd[3309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:41.233898 systemd-logind[1439]: New session 8 of user core. Jan 24 00:53:41.246022 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:53:41.291375 kubelet[2486]: E0124 00:53:41.291320 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:41.381562 sshd[3309]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:41.385736 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:50202.service: Deactivated successfully. Jan 24 00:53:41.387687 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:53:41.388784 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:53:41.390513 systemd-logind[1439]: Removed session 8. Jan 24 00:53:42.540436 containerd[1451]: time="2026-01-24T00:53:42.540368905Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:42.541445 containerd[1451]: time="2026-01-24T00:53:42.541397201Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 24 00:53:42.542926 containerd[1451]: time="2026-01-24T00:53:42.542767373Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:53:42.545133 containerd[1451]: time="2026-01-24T00:53:42.545074038Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.305565179s" Jan 24 00:53:42.545182 containerd[1451]: time="2026-01-24T00:53:42.545142174Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 24 00:53:42.552220 containerd[1451]: time="2026-01-24T00:53:42.552157348Z" level=info msg="CreateContainer within sandbox \"29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 24 00:53:42.567529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4136231872.mount: Deactivated successfully. 
Jan 24 00:53:42.569629 containerd[1451]: time="2026-01-24T00:53:42.569500812Z" level=info msg="CreateContainer within sandbox \"29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\"" Jan 24 00:53:42.570357 containerd[1451]: time="2026-01-24T00:53:42.570175247Z" level=info msg="StartContainer for \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\"" Jan 24 00:53:42.613133 systemd[1]: Started cri-containerd-dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339.scope - libcontainer container dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339. Jan 24 00:53:42.682219 containerd[1451]: time="2026-01-24T00:53:42.682154073Z" level=info msg="StartContainer for \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\" returns successfully" Jan 24 00:53:43.300700 kubelet[2486]: E0124 00:53:43.300643 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:44.303542 kubelet[2486]: E0124 00:53:44.303462 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:45.456508 systemd-networkd[1381]: cilium_host: Link UP Jan 24 00:53:45.456729 systemd-networkd[1381]: cilium_net: Link UP Jan 24 00:53:45.458478 systemd-networkd[1381]: cilium_net: Gained carrier Jan 24 00:53:45.458791 systemd-networkd[1381]: cilium_host: Gained carrier Jan 24 00:53:45.459153 systemd-networkd[1381]: cilium_net: Gained IPv6LL Jan 24 00:53:45.459446 systemd-networkd[1381]: cilium_host: Gained IPv6LL Jan 24 00:53:45.630260 systemd-networkd[1381]: cilium_vxlan: Link UP Jan 24 00:53:45.630271 systemd-networkd[1381]: cilium_vxlan: Gained carrier Jan 24 00:53:45.921952 kernel: NET: Registered PF_ALG protocol family Jan 24 00:53:46.393488 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:43194.service - OpenSSH per-connection server daemon (10.0.0.1:43194). Jan 24 00:53:46.433612 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 43194 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:46.435399 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:46.440713 systemd-logind[1439]: New session 9 of user core. Jan 24 00:53:46.452445 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:53:46.603827 sshd[3589]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:46.609350 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:43194.service: Deactivated successfully. Jan 24 00:53:46.611746 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:53:46.613374 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:53:46.614835 systemd-logind[1439]: Removed session 9. 
Jan 24 00:53:46.899507 systemd-networkd[1381]: lxc_health: Link UP Jan 24 00:53:46.910512 systemd-networkd[1381]: lxc_health: Gained carrier Jan 24 00:53:47.347824 kubelet[2486]: E0124 00:53:47.347697 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:47.369190 kubelet[2486]: I0124 00:53:47.368257 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-w9b2g" podStartSLOduration=5.414433912 podStartE2EDuration="26.368235841s" podCreationTimestamp="2026-01-24 00:53:21 +0000 UTC" firstStartedPulling="2026-01-24 00:53:21.592287658 +0000 UTC m=+6.521969518" lastFinishedPulling="2026-01-24 00:53:42.546089587 +0000 UTC m=+27.475771447" observedRunningTime="2026-01-24 00:53:43.314381467 +0000 UTC m=+28.244063357" watchObservedRunningTime="2026-01-24 00:53:47.368235841 +0000 UTC m=+32.297917702" Jan 24 00:53:47.486750 systemd-networkd[1381]: lxc9a71597b2fa3: Link UP Jan 24 00:53:47.514311 kernel: eth0: renamed from tmp0e492 Jan 24 00:53:47.513935 systemd-networkd[1381]: lxcba867235037f: Link UP Jan 24 00:53:47.519941 kernel: eth0: renamed from tmp3821a Jan 24 00:53:47.530254 systemd-networkd[1381]: lxcba867235037f: Gained carrier Jan 24 00:53:47.530663 systemd-networkd[1381]: lxc9a71597b2fa3: Gained carrier Jan 24 00:53:47.629217 systemd-networkd[1381]: cilium_vxlan: Gained IPv6LL Jan 24 00:53:48.460252 systemd-networkd[1381]: lxc_health: Gained IPv6LL Jan 24 00:53:48.972287 systemd-networkd[1381]: lxc9a71597b2fa3: Gained IPv6LL Jan 24 00:53:48.972940 systemd-networkd[1381]: lxcba867235037f: Gained IPv6LL Jan 24 00:53:50.937539 kubelet[2486]: I0124 00:53:50.937449 2486 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:53:50.938283 kubelet[2486]: E0124 00:53:50.938039 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:51.321700 kubelet[2486]: E0124 00:53:51.320360 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:51.467217 containerd[1451]: time="2026-01-24T00:53:51.465572191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:53:51.467683 containerd[1451]: time="2026-01-24T00:53:51.467199947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:53:51.467683 containerd[1451]: time="2026-01-24T00:53:51.467220245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:51.467683 containerd[1451]: time="2026-01-24T00:53:51.467325532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:51.468651 containerd[1451]: time="2026-01-24T00:53:51.468389393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:53:51.468651 containerd[1451]: time="2026-01-24T00:53:51.468476615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:53:51.468651 containerd[1451]: time="2026-01-24T00:53:51.468493827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:51.468810 containerd[1451]: time="2026-01-24T00:53:51.468618570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:53:51.509153 systemd[1]: Started cri-containerd-0e492ed6d2012b2d2702fd3aa1630fa0be9e95caa273001339cb5bab34278063.scope - libcontainer container 0e492ed6d2012b2d2702fd3aa1630fa0be9e95caa273001339cb5bab34278063. Jan 24 00:53:51.511763 systemd[1]: Started cri-containerd-3821a0e803eada6e77c2db8681dfab8ea63d4d2c64bd340d259ddb216f7c5832.scope - libcontainer container 3821a0e803eada6e77c2db8681dfab8ea63d4d2c64bd340d259ddb216f7c5832. Jan 24 00:53:51.529349 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:53:51.536006 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:53:51.580282 containerd[1451]: time="2026-01-24T00:53:51.579464221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bl6sl,Uid:5de1e352-91d7-4ab0-9136-08250e268679,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e492ed6d2012b2d2702fd3aa1630fa0be9e95caa273001339cb5bab34278063\"" Jan 24 00:53:51.581352 kubelet[2486]: E0124 00:53:51.581255 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:51.584712 containerd[1451]: time="2026-01-24T00:53:51.584488323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9lzv7,Uid:48f3ad0f-10a2-420b-b6dd-d2c1a25342c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3821a0e803eada6e77c2db8681dfab8ea63d4d2c64bd340d259ddb216f7c5832\"" Jan 24 00:53:51.587139 kubelet[2486]: E0124 00:53:51.587095 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:51.590652 containerd[1451]: time="2026-01-24T00:53:51.590592612Z" level=info msg="CreateContainer within sandbox \"0e492ed6d2012b2d2702fd3aa1630fa0be9e95caa273001339cb5bab34278063\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:53:51.594142 containerd[1451]: time="2026-01-24T00:53:51.594073492Z" level=info msg="CreateContainer within sandbox \"3821a0e803eada6e77c2db8681dfab8ea63d4d2c64bd340d259ddb216f7c5832\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:53:51.625296 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:43198.service - OpenSSH per-connection server daemon (10.0.0.1:43198). 
Jan 24 00:53:51.654482 containerd[1451]: time="2026-01-24T00:53:51.654216101Z" level=info msg="CreateContainer within sandbox \"3821a0e803eada6e77c2db8681dfab8ea63d4d2c64bd340d259ddb216f7c5832\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33e660663e5d2c58a3abcd79fb32c5527e2306ecc0af390da1f36a8fe37138c4\"" Jan 24 00:53:51.656068 containerd[1451]: time="2026-01-24T00:53:51.655190985Z" level=info msg="StartContainer for \"33e660663e5d2c58a3abcd79fb32c5527e2306ecc0af390da1f36a8fe37138c4\"" Jan 24 00:53:51.669203 containerd[1451]: time="2026-01-24T00:53:51.669118330Z" level=info msg="CreateContainer within sandbox \"0e492ed6d2012b2d2702fd3aa1630fa0be9e95caa273001339cb5bab34278063\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"719201fe7464d8519af8d0fcb492cdaf8dceea495d9a4486ef5f761f0ff4a5ac\"" Jan 24 00:53:51.671167 containerd[1451]: time="2026-01-24T00:53:51.670078354Z" level=info msg="StartContainer for \"719201fe7464d8519af8d0fcb492cdaf8dceea495d9a4486ef5f761f0ff4a5ac\"" Jan 24 00:53:51.681125 sshd[3860]: Accepted publickey for core from 10.0.0.1 port 43198 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:51.683348 sshd[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:51.692145 systemd-logind[1439]: New session 10 of user core. Jan 24 00:53:51.696079 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:53:51.711294 systemd[1]: Started cri-containerd-33e660663e5d2c58a3abcd79fb32c5527e2306ecc0af390da1f36a8fe37138c4.scope - libcontainer container 33e660663e5d2c58a3abcd79fb32c5527e2306ecc0af390da1f36a8fe37138c4. Jan 24 00:53:51.727498 systemd[1]: Started cri-containerd-719201fe7464d8519af8d0fcb492cdaf8dceea495d9a4486ef5f761f0ff4a5ac.scope - libcontainer container 719201fe7464d8519af8d0fcb492cdaf8dceea495d9a4486ef5f761f0ff4a5ac. Jan 24 00:53:51.795460 containerd[1451]: time="2026-01-24T00:53:51.795336083Z" level=info msg="StartContainer for \"33e660663e5d2c58a3abcd79fb32c5527e2306ecc0af390da1f36a8fe37138c4\" returns successfully" Jan 24 00:53:51.808719 containerd[1451]: time="2026-01-24T00:53:51.808599787Z" level=info msg="StartContainer for \"719201fe7464d8519af8d0fcb492cdaf8dceea495d9a4486ef5f761f0ff4a5ac\" returns successfully" Jan 24 00:53:51.878735 sshd[3860]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:51.884786 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:53:51.889191 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:43198.service: Deactivated successfully. Jan 24 00:53:51.891564 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:53:51.893085 systemd-logind[1439]: Removed session 10. 
Jan 24 00:53:52.361560 kubelet[2486]: E0124 00:53:52.360676 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:52.367437 kubelet[2486]: E0124 00:53:52.367287 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:52.381325 kubelet[2486]: I0124 00:53:52.381260 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bl6sl" podStartSLOduration=31.381244459 podStartE2EDuration="31.381244459s" podCreationTimestamp="2026-01-24 00:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:52.380656724 +0000 UTC m=+37.310338605" watchObservedRunningTime="2026-01-24 00:53:52.381244459 +0000 UTC m=+37.310926321" Jan 24 00:53:52.400003 kubelet[2486]: I0124 00:53:52.399681 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9lzv7" podStartSLOduration=31.399668311 podStartE2EDuration="31.399668311s" podCreationTimestamp="2026-01-24 00:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:53:52.399081411 +0000 UTC m=+37.328763272" watchObservedRunningTime="2026-01-24 00:53:52.399668311 +0000 UTC m=+37.329350172" Jan 24 00:53:53.370678 kubelet[2486]: E0124 00:53:53.370543 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:53.371560 kubelet[2486]: E0124 00:53:53.371155 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:54.373931 kubelet[2486]: E0124 00:53:54.373772 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:54.374432 kubelet[2486]: E0124 00:53:54.374230 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:53:56.890726 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:33118.service - OpenSSH per-connection server daemon (10.0.0.1:33118). Jan 24 00:53:56.938945 sshd[3965]: Accepted publickey for core from 10.0.0.1 port 33118 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:53:56.940717 sshd[3965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:53:56.946771 systemd-logind[1439]: New session 11 of user core. Jan 24 00:53:56.958088 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:53:57.096210 sshd[3965]: pam_unix(sshd:session): session closed for user core Jan 24 00:53:57.100584 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:33118.service: Deactivated successfully. Jan 24 00:53:57.102485 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:53:57.103546 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:53:57.105295 systemd-logind[1439]: Removed session 11. 
Jan 24 00:54:02.112010 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:33130.service - OpenSSH per-connection server daemon (10.0.0.1:33130). Jan 24 00:54:02.148668 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 33130 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:02.150721 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:02.157835 systemd-logind[1439]: New session 12 of user core. Jan 24 00:54:02.165263 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:54:02.305388 sshd[3980]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:02.323178 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:33130.service: Deactivated successfully. Jan 24 00:54:02.326081 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:54:02.328727 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:54:02.337368 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:33136.service - OpenSSH per-connection server daemon (10.0.0.1:33136). Jan 24 00:54:02.339159 systemd-logind[1439]: Removed session 12. Jan 24 00:54:02.377768 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 33136 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:02.379455 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:02.386076 systemd-logind[1439]: New session 13 of user core. Jan 24 00:54:02.393406 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:54:02.595588 sshd[3996]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:02.610088 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:33136.service: Deactivated successfully. Jan 24 00:54:02.615079 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:54:02.619063 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:54:02.630495 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:33142.service - OpenSSH per-connection server daemon (10.0.0.1:33142). Jan 24 00:54:02.636134 systemd-logind[1439]: Removed session 13. Jan 24 00:54:02.681413 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 33142 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:02.683554 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:02.690649 systemd-logind[1439]: New session 14 of user core. Jan 24 00:54:02.701165 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:54:02.868395 sshd[4008]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:02.872316 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:33142.service: Deactivated successfully. Jan 24 00:54:02.874763 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:54:02.876682 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:54:02.878212 systemd-logind[1439]: Removed session 14. Jan 24 00:54:07.883948 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:51480.service - OpenSSH per-connection server daemon (10.0.0.1:51480). Jan 24 00:54:07.925158 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 51480 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:07.927817 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:07.934924 systemd-logind[1439]: New session 15 of user core. 
Jan 24 00:54:07.945292 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:54:08.078601 sshd[4022]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:08.083635 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:51480.service: Deactivated successfully. Jan 24 00:54:08.086358 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:54:08.087647 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:54:08.089126 systemd-logind[1439]: Removed session 15. Jan 24 00:54:13.094592 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:51488.service - OpenSSH per-connection server daemon (10.0.0.1:51488). Jan 24 00:54:13.135465 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 51488 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:13.138122 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:13.143730 systemd-logind[1439]: New session 16 of user core. Jan 24 00:54:13.153216 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:54:13.283654 sshd[4038]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:13.297697 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:51488.service: Deactivated successfully. Jan 24 00:54:13.299727 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:54:13.301986 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:54:13.307480 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:51494.service - OpenSSH per-connection server daemon (10.0.0.1:51494). Jan 24 00:54:13.309090 systemd-logind[1439]: Removed session 16. Jan 24 00:54:13.339603 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 51494 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:13.341450 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:13.347008 systemd-logind[1439]: New session 17 of user core. Jan 24 00:54:13.357158 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:54:13.616416 sshd[4054]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:13.629581 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:51494.service: Deactivated successfully. Jan 24 00:54:13.631638 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:54:13.633751 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:54:13.644228 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:51498.service - OpenSSH per-connection server daemon (10.0.0.1:51498). Jan 24 00:54:13.645583 systemd-logind[1439]: Removed session 17. Jan 24 00:54:13.685609 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 51498 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:13.687388 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:13.693794 systemd-logind[1439]: New session 18 of user core. Jan 24 00:54:13.704205 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:54:14.355067 sshd[4067]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:14.364730 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:51498.service: Deactivated successfully. Jan 24 00:54:14.369110 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:54:14.370770 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit. 
Jan 24 00:54:14.384105 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:51508.service - OpenSSH per-connection server daemon (10.0.0.1:51508). Jan 24 00:54:14.389552 systemd-logind[1439]: Removed session 18. Jan 24 00:54:14.428825 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 51508 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:14.431107 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:14.437387 systemd-logind[1439]: New session 19 of user core. Jan 24 00:54:14.451184 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:54:14.754370 sshd[4086]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:14.769769 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:51508.service: Deactivated successfully. Jan 24 00:54:14.772211 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:54:14.774179 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:54:14.785609 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:44212.service - OpenSSH per-connection server daemon (10.0.0.1:44212). Jan 24 00:54:14.787959 systemd-logind[1439]: Removed session 19. Jan 24 00:54:14.825376 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 44212 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:14.827460 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:14.835768 systemd-logind[1439]: New session 20 of user core. Jan 24 00:54:14.845281 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:54:14.979398 sshd[4098]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:14.983694 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:44212.service: Deactivated successfully. Jan 24 00:54:14.986556 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:54:14.988763 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:54:14.991663 systemd-logind[1439]: Removed session 20. Jan 24 00:54:19.998050 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:44218.service - OpenSSH per-connection server daemon (10.0.0.1:44218). Jan 24 00:54:20.049211 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 44218 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:20.051526 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:20.065514 systemd-logind[1439]: New session 21 of user core. Jan 24 00:54:20.085328 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:54:20.215579 sshd[4116]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:20.221255 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:44218.service: Deactivated successfully. Jan 24 00:54:20.223498 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:54:20.224584 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:54:20.226600 systemd-logind[1439]: Removed session 21. Jan 24 00:54:23.200299 kubelet[2486]: E0124 00:54:23.200234 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:25.229054 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:60348.service - OpenSSH per-connection server daemon (10.0.0.1:60348). 
Jan 24 00:54:25.270599 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 60348 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:25.272719 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:25.278821 systemd-logind[1439]: New session 22 of user core. Jan 24 00:54:25.294255 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 00:54:25.436250 sshd[4134]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:25.439833 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:60348.service: Deactivated successfully. Jan 24 00:54:25.442185 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:54:25.444844 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:54:25.446607 systemd-logind[1439]: Removed session 22. Jan 24 00:54:28.200021 kubelet[2486]: E0124 00:54:28.199978 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:30.449154 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:60364.service - OpenSSH per-connection server daemon (10.0.0.1:60364). Jan 24 00:54:30.490147 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 60364 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:30.492567 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:30.498915 systemd-logind[1439]: New session 23 of user core. Jan 24 00:54:30.511225 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:54:30.656191 sshd[4148]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:30.672806 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:60364.service: Deactivated successfully. Jan 24 00:54:30.674923 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:54:30.677091 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:54:30.682432 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:60374.service - OpenSSH per-connection server daemon (10.0.0.1:60374). Jan 24 00:54:30.683812 systemd-logind[1439]: Removed session 23. Jan 24 00:54:30.729950 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 60374 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:30.732246 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:30.738637 systemd-logind[1439]: New session 24 of user core. Jan 24 00:54:30.747171 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 00:54:31.199956 kubelet[2486]: E0124 00:54:31.199836 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:32.223202 containerd[1451]: time="2026-01-24T00:54:32.223124883Z" level=info msg="StopContainer for \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\" with timeout 30 (s)" Jan 24 00:54:32.225124 containerd[1451]: time="2026-01-24T00:54:32.224012143Z" level=info msg="Stop container \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\" with signal terminated" Jan 24 00:54:32.247958 systemd[1]: cri-containerd-dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339.scope: Deactivated successfully. 
Jan 24 00:54:32.285181 containerd[1451]: time="2026-01-24T00:54:32.285129315Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:54:32.290001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339-rootfs.mount: Deactivated successfully. Jan 24 00:54:32.295806 containerd[1451]: time="2026-01-24T00:54:32.295750976Z" level=info msg="StopContainer for \"264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8\" with timeout 2 (s)" Jan 24 00:54:32.296290 containerd[1451]: time="2026-01-24T00:54:32.296175722Z" level=info msg="Stop container \"264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8\" with signal terminated" Jan 24 00:54:32.297022 containerd[1451]: time="2026-01-24T00:54:32.296646852Z" level=info msg="shim disconnected" id=dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339 namespace=k8s.io Jan 24 00:54:32.297022 containerd[1451]: time="2026-01-24T00:54:32.296694280Z" level=warning msg="cleaning up after shim disconnected" id=dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339 namespace=k8s.io Jan 24 00:54:32.297022 containerd[1451]: time="2026-01-24T00:54:32.296710870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:32.305968 systemd-networkd[1381]: lxc_health: Link DOWN Jan 24 00:54:32.305977 systemd-networkd[1381]: lxc_health: Lost carrier Jan 24 00:54:32.332168 containerd[1451]: time="2026-01-24T00:54:32.332121203Z" level=info msg="StopContainer for \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\" returns successfully" Jan 24 00:54:32.332497 systemd[1]: cri-containerd-264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8.scope: Deactivated successfully. Jan 24 00:54:32.333247 systemd[1]: cri-containerd-264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8.scope: Consumed 8.586s CPU time. Jan 24 00:54:32.339284 containerd[1451]: time="2026-01-24T00:54:32.339122018Z" level=info msg="StopPodSandbox for \"29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc\"" Jan 24 00:54:32.339284 containerd[1451]: time="2026-01-24T00:54:32.339188089Z" level=info msg="Container to stop \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:54:32.341313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc-shm.mount: Deactivated successfully. Jan 24 00:54:32.350559 systemd[1]: cri-containerd-29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc.scope: Deactivated successfully. Jan 24 00:54:32.374114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8-rootfs.mount: Deactivated successfully. 
Jan 24 00:54:32.382586 containerd[1451]: time="2026-01-24T00:54:32.382514293Z" level=info msg="shim disconnected" id=264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8 namespace=k8s.io Jan 24 00:54:32.383327 containerd[1451]: time="2026-01-24T00:54:32.382733849Z" level=warning msg="cleaning up after shim disconnected" id=264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8 namespace=k8s.io Jan 24 00:54:32.383327 containerd[1451]: time="2026-01-24T00:54:32.382752363Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:32.393126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc-rootfs.mount: Deactivated successfully. Jan 24 00:54:32.401921 containerd[1451]: time="2026-01-24T00:54:32.400543729Z" level=info msg="shim disconnected" id=29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc namespace=k8s.io Jan 24 00:54:32.401921 containerd[1451]: time="2026-01-24T00:54:32.400666095Z" level=warning msg="cleaning up after shim disconnected" id=29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc namespace=k8s.io Jan 24 00:54:32.401921 containerd[1451]: time="2026-01-24T00:54:32.400679790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:32.412433 containerd[1451]: time="2026-01-24T00:54:32.412370948Z" level=info msg="StopContainer for \"264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8\" returns successfully" Jan 24 00:54:32.413118 containerd[1451]: time="2026-01-24T00:54:32.413021571Z" level=info msg="StopPodSandbox for \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\"" Jan 24 00:54:32.413118 containerd[1451]: time="2026-01-24T00:54:32.413109804Z" level=info msg="Container to stop \"91ec8dd3f37f508ea0fa6d47f84dba5a0083bc22c6eff7a40c8e5eb7c85dbc06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:54:32.413253 containerd[1451]: time="2026-01-24T00:54:32.413126655Z" level=info msg="Container to stop \"34546db71b83d52901b932c4d46e3e602d89b56c084b6fb87833dcd0697ed3f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:54:32.413253 containerd[1451]: time="2026-01-24T00:54:32.413141161Z" level=info msg="Container to stop \"264f4b2eb15f4e5d7c6516ac6143d70113dec19caa6ca2426df0f899b4ac67b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:54:32.413253 containerd[1451]: time="2026-01-24T00:54:32.413155719Z" level=info msg="Container to stop \"1696c97a6d8287cdf3cdc47561adb5583948b9ae60fabc38abd6f00fc7e7e5e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:54:32.413253 containerd[1451]: time="2026-01-24T00:54:32.413169745Z" level=info msg="Container to stop \"2144f46ee85795bcd0d8910863bf03dcb061676f5646d599324feead7cffbfcb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 00:54:32.421120 systemd[1]: cri-containerd-791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a.scope: Deactivated successfully. 
Jan 24 00:54:32.435347 containerd[1451]: time="2026-01-24T00:54:32.435294788Z" level=info msg="TearDown network for sandbox \"29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc\" successfully" Jan 24 00:54:32.435347 containerd[1451]: time="2026-01-24T00:54:32.435345732Z" level=info msg="StopPodSandbox for \"29e79dfdb45f9c736d45f1c8516f688162f281ad3d27e3b1eb5e8be5145fe9bc\" returns successfully" Jan 24 00:54:32.454685 containerd[1451]: time="2026-01-24T00:54:32.454581545Z" level=info msg="shim disconnected" id=791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a namespace=k8s.io Jan 24 00:54:32.454685 containerd[1451]: time="2026-01-24T00:54:32.454670991Z" level=warning msg="cleaning up after shim disconnected" id=791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a namespace=k8s.io Jan 24 00:54:32.454685 containerd[1451]: time="2026-01-24T00:54:32.454685427Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:32.479616 containerd[1451]: time="2026-01-24T00:54:32.479377538Z" level=info msg="TearDown network for sandbox \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" successfully" Jan 24 00:54:32.479616 containerd[1451]: time="2026-01-24T00:54:32.479431869Z" level=info msg="StopPodSandbox for \"791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a\" returns successfully" Jan 24 00:54:32.484452 kubelet[2486]: I0124 00:54:32.484288 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d740a83b-996a-4d6c-8de9-107b47be9b41-cilium-config-path\") pod \"d740a83b-996a-4d6c-8de9-107b47be9b41\" (UID: \"d740a83b-996a-4d6c-8de9-107b47be9b41\") " Jan 24 00:54:32.485169 kubelet[2486]: I0124 00:54:32.484375 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br6zw\" (UniqueName: \"kubernetes.io/projected/d740a83b-996a-4d6c-8de9-107b47be9b41-kube-api-access-br6zw\") pod \"d740a83b-996a-4d6c-8de9-107b47be9b41\" (UID: \"d740a83b-996a-4d6c-8de9-107b47be9b41\") " Jan 24 00:54:32.490716 kubelet[2486]: I0124 00:54:32.490564 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d740a83b-996a-4d6c-8de9-107b47be9b41-kube-api-access-br6zw" (OuterVolumeSpecName: "kube-api-access-br6zw") pod "d740a83b-996a-4d6c-8de9-107b47be9b41" (UID: "d740a83b-996a-4d6c-8de9-107b47be9b41"). InnerVolumeSpecName "kube-api-access-br6zw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:54:32.492992 kubelet[2486]: I0124 00:54:32.492532 2486 scope.go:117] "RemoveContainer" containerID="dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339" Jan 24 00:54:32.496177 containerd[1451]: time="2026-01-24T00:54:32.495785604Z" level=info msg="RemoveContainer for \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\"" Jan 24 00:54:32.496394 kubelet[2486]: I0124 00:54:32.496297 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d740a83b-996a-4d6c-8de9-107b47be9b41-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d740a83b-996a-4d6c-8de9-107b47be9b41" (UID: "d740a83b-996a-4d6c-8de9-107b47be9b41"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:54:32.499254 kubelet[2486]: I0124 00:54:32.499206 2486 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a" Jan 24 00:54:32.503490 containerd[1451]: time="2026-01-24T00:54:32.503305012Z" level=info msg="RemoveContainer for \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\" returns successfully" Jan 24 00:54:32.504108 kubelet[2486]: I0124 00:54:32.504020 2486 scope.go:117] "RemoveContainer" containerID="dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339" Jan 24 00:54:32.513835 containerd[1451]: time="2026-01-24T00:54:32.513616846Z" level=error msg="ContainerStatus for \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\": not found" Jan 24 00:54:32.531216 kubelet[2486]: E0124 00:54:32.530967 2486 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\": not found" containerID="dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339" Jan 24 00:54:32.531216 kubelet[2486]: I0124 00:54:32.531097 2486 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339"} err="failed to get container status \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbaa94a53c1801ec496b50501015eebfcfe1a773103fa789e2f05e392d78c339\": not found" Jan 24 00:54:32.586231 kubelet[2486]: I0124 00:54:32.586102 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-xtables-lock\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586231 kubelet[2486]: I0124 00:54:32.586161 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-host-proc-sys-net\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586231 kubelet[2486]: I0124 00:54:32.586185 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-hostproc\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586231 kubelet[2486]: I0124 00:54:32.586207 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87ml5\" (UniqueName: \"kubernetes.io/projected/e2940c67-12c2-402c-93f1-4377a6b6351b-kube-api-access-87ml5\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586231 kubelet[2486]: I0124 00:54:32.586224 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-config-path\") pod 
\"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586231 kubelet[2486]: I0124 00:54:32.586240 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-bpf-maps\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586547 kubelet[2486]: I0124 00:54:32.586254 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-etc-cni-netd\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586547 kubelet[2486]: I0124 00:54:32.586270 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-lib-modules\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586547 kubelet[2486]: I0124 00:54:32.586285 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-cgroup\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586547 kubelet[2486]: I0124 00:54:32.586304 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2940c67-12c2-402c-93f1-4377a6b6351b-clustermesh-secrets\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586547 kubelet[2486]: I0124 00:54:32.586318 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2940c67-12c2-402c-93f1-4377a6b6351b-hubble-tls\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586547 kubelet[2486]: I0124 00:54:32.586315 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-hostproc" (OuterVolumeSpecName: "hostproc") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.586680 kubelet[2486]: I0124 00:54:32.586331 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-run\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586680 kubelet[2486]: I0124 00:54:32.586346 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cni-path\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586680 kubelet[2486]: I0124 00:54:32.586369 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cni-path" (OuterVolumeSpecName: "cni-path") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.586680 kubelet[2486]: I0124 00:54:32.586378 2486 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-host-proc-sys-kernel\") pod \"e2940c67-12c2-402c-93f1-4377a6b6351b\" (UID: \"e2940c67-12c2-402c-93f1-4377a6b6351b\") " Jan 24 00:54:32.586680 kubelet[2486]: I0124 00:54:32.586430 2486 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.586680 kubelet[2486]: I0124 00:54:32.586449 2486 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.586680 kubelet[2486]: I0124 00:54:32.586463 2486 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d740a83b-996a-4d6c-8de9-107b47be9b41-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.586993 kubelet[2486]: I0124 00:54:32.586476 2486 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-br6zw\" (UniqueName: \"kubernetes.io/projected/d740a83b-996a-4d6c-8de9-107b47be9b41-kube-api-access-br6zw\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.586993 kubelet[2486]: I0124 00:54:32.586506 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.586993 kubelet[2486]: I0124 00:54:32.586532 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.586993 kubelet[2486]: I0124 00:54:32.586539 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.586993 kubelet[2486]: I0124 00:54:32.586556 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.587185 kubelet[2486]: I0124 00:54:32.586576 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.587185 kubelet[2486]: I0124 00:54:32.586581 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.587503 kubelet[2486]: I0124 00:54:32.587280 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.589301 kubelet[2486]: I0124 00:54:32.589203 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 00:54:32.590656 kubelet[2486]: I0124 00:54:32.590613 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2940c67-12c2-402c-93f1-4377a6b6351b-kube-api-access-87ml5" (OuterVolumeSpecName: "kube-api-access-87ml5") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "kube-api-access-87ml5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:54:32.591098 kubelet[2486]: I0124 00:54:32.591062 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2940c67-12c2-402c-93f1-4377a6b6351b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:54:32.591594 kubelet[2486]: I0124 00:54:32.591560 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2940c67-12c2-402c-93f1-4377a6b6351b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:54:32.593270 kubelet[2486]: I0124 00:54:32.593203 2486 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2940c67-12c2-402c-93f1-4377a6b6351b" (UID: "e2940c67-12c2-402c-93f1-4377a6b6351b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:54:32.687359 kubelet[2486]: I0124 00:54:32.687261 2486 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687359 kubelet[2486]: I0124 00:54:32.687336 2486 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687359 kubelet[2486]: I0124 00:54:32.687352 2486 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687359 kubelet[2486]: I0124 00:54:32.687365 2486 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2940c67-12c2-402c-93f1-4377a6b6351b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687675 kubelet[2486]: I0124 00:54:32.687379 2486 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2940c67-12c2-402c-93f1-4377a6b6351b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687675 kubelet[2486]: I0124 00:54:32.687393 2486 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687675 kubelet[2486]: I0124 00:54:32.687403 2486 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687675 kubelet[2486]: I0124 00:54:32.687416 2486 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687675 kubelet[2486]: I0124 00:54:32.687428 2486 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687675 kubelet[2486]: I0124 00:54:32.687440 2486 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-87ml5\" (UniqueName: 
\"kubernetes.io/projected/e2940c67-12c2-402c-93f1-4377a6b6351b-kube-api-access-87ml5\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687675 kubelet[2486]: I0124 00:54:32.687452 2486 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2940c67-12c2-402c-93f1-4377a6b6351b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.687675 kubelet[2486]: I0124 00:54:32.687467 2486 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2940c67-12c2-402c-93f1-4377a6b6351b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 24 00:54:32.801594 systemd[1]: Removed slice kubepods-besteffort-podd740a83b_996a_4d6c_8de9_107b47be9b41.slice - libcontainer container kubepods-besteffort-podd740a83b_996a_4d6c_8de9_107b47be9b41.slice. Jan 24 00:54:33.203119 kubelet[2486]: I0124 00:54:33.202968 2486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d740a83b-996a-4d6c-8de9-107b47be9b41" path="/var/lib/kubelet/pods/d740a83b-996a-4d6c-8de9-107b47be9b41/volumes" Jan 24 00:54:33.211406 systemd[1]: Removed slice kubepods-burstable-pode2940c67_12c2_402c_93f1_4377a6b6351b.slice - libcontainer container kubepods-burstable-pode2940c67_12c2_402c_93f1_4377a6b6351b.slice. Jan 24 00:54:33.211560 systemd[1]: kubepods-burstable-pode2940c67_12c2_402c_93f1_4377a6b6351b.slice: Consumed 8.715s CPU time. Jan 24 00:54:33.253713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a-rootfs.mount: Deactivated successfully. Jan 24 00:54:33.253987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-791fe31d2acbf3cc3f16c3ed53e6d6725701b337f86f5524dfe9c51e2bdc3e0a-shm.mount: Deactivated successfully. Jan 24 00:54:33.254175 systemd[1]: var-lib-kubelet-pods-d740a83b\x2d996a\x2d4d6c\x2d8de9\x2d107b47be9b41-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbr6zw.mount: Deactivated successfully. Jan 24 00:54:33.254304 systemd[1]: var-lib-kubelet-pods-e2940c67\x2d12c2\x2d402c\x2d93f1\x2d4377a6b6351b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 24 00:54:33.254431 systemd[1]: var-lib-kubelet-pods-e2940c67\x2d12c2\x2d402c\x2d93f1\x2d4377a6b6351b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 24 00:54:33.254559 systemd[1]: var-lib-kubelet-pods-e2940c67\x2d12c2\x2d402c\x2d93f1\x2d4377a6b6351b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d87ml5.mount: Deactivated successfully. Jan 24 00:54:34.171186 sshd[4162]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:34.184156 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:60374.service: Deactivated successfully. Jan 24 00:54:34.186263 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 00:54:34.188080 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit. Jan 24 00:54:34.196329 systemd[1]: Started sshd@24-10.0.0.97:22-10.0.0.1:60390.service - OpenSSH per-connection server daemon (10.0.0.1:60390). Jan 24 00:54:34.197600 systemd-logind[1439]: Removed session 24. Jan 24 00:54:34.237395 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 60390 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:34.239331 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:34.245796 systemd-logind[1439]: New session 25 of user core. 
Jan 24 00:54:34.256125 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 00:54:34.992558 sshd[4325]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:35.001707 systemd[1]: sshd@24-10.0.0.97:22-10.0.0.1:60390.service: Deactivated successfully. Jan 24 00:54:35.006352 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:54:35.008317 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit. Jan 24 00:54:35.022590 systemd[1]: Started sshd@25-10.0.0.97:22-10.0.0.1:40582.service - OpenSSH per-connection server daemon (10.0.0.1:40582). Jan 24 00:54:35.024942 systemd-logind[1439]: Removed session 25. Jan 24 00:54:35.062285 systemd[1]: Created slice kubepods-burstable-podf5d74d51_52c9_4131_a5fa_188081289ff4.slice - libcontainer container kubepods-burstable-podf5d74d51_52c9_4131_a5fa_188081289ff4.slice. Jan 24 00:54:35.066196 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 40582 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:35.070835 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:35.082138 systemd-logind[1439]: New session 26 of user core. Jan 24 00:54:35.100332 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 24 00:54:35.107641 kubelet[2486]: I0124 00:54:35.107559 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-hostproc\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.107641 kubelet[2486]: I0124 00:54:35.107614 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-cni-path\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108525 kubelet[2486]: I0124 00:54:35.107702 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f5d74d51-52c9-4131-a5fa-188081289ff4-cilium-ipsec-secrets\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108525 kubelet[2486]: I0124 00:54:35.107734 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-host-proc-sys-kernel\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108525 kubelet[2486]: I0124 00:54:35.107748 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5d74d51-52c9-4131-a5fa-188081289ff4-hubble-tls\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108525 kubelet[2486]: I0124 00:54:35.107762 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvn4\" (UniqueName: \"kubernetes.io/projected/f5d74d51-52c9-4131-a5fa-188081289ff4-kube-api-access-8zvn4\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 
00:54:35.108525 kubelet[2486]: I0124 00:54:35.107781 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-etc-cni-netd\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108763 kubelet[2486]: I0124 00:54:35.107794 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5d74d51-52c9-4131-a5fa-188081289ff4-cilium-config-path\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108763 kubelet[2486]: I0124 00:54:35.107806 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-cilium-run\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108763 kubelet[2486]: I0124 00:54:35.107840 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-bpf-maps\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108763 kubelet[2486]: I0124 00:54:35.107956 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-xtables-lock\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108763 kubelet[2486]: I0124 00:54:35.108055 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-cilium-cgroup\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.108763 kubelet[2486]: I0124 00:54:35.108094 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-lib-modules\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.109137 kubelet[2486]: I0124 00:54:35.108158 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5d74d51-52c9-4131-a5fa-188081289ff4-clustermesh-secrets\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.109137 kubelet[2486]: I0124 00:54:35.108186 2486 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5d74d51-52c9-4131-a5fa-188081289ff4-host-proc-sys-net\") pod \"cilium-f2lh8\" (UID: \"f5d74d51-52c9-4131-a5fa-188081289ff4\") " pod="kube-system/cilium-f2lh8" Jan 24 00:54:35.164955 sshd[4338]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:35.174087 systemd[1]: sshd@25-10.0.0.97:22-10.0.0.1:40582.service: Deactivated successfully. 
Jan 24 00:54:35.176803 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 00:54:35.179328 systemd-logind[1439]: Session 26 logged out. Waiting for processes to exit. Jan 24 00:54:35.188410 systemd[1]: Started sshd@26-10.0.0.97:22-10.0.0.1:40594.service - OpenSSH per-connection server daemon (10.0.0.1:40594). Jan 24 00:54:35.190262 systemd-logind[1439]: Removed session 26. Jan 24 00:54:35.203595 kubelet[2486]: I0124 00:54:35.203510 2486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2940c67-12c2-402c-93f1-4377a6b6351b" path="/var/lib/kubelet/pods/e2940c67-12c2-402c-93f1-4377a6b6351b/volumes" Jan 24 00:54:35.246590 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 40594 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:54:35.249730 sshd[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:54:35.264086 systemd-logind[1439]: New session 27 of user core. Jan 24 00:54:35.269128 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 24 00:54:35.269966 kubelet[2486]: E0124 00:54:35.269591 2486 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 24 00:54:35.372472 kubelet[2486]: E0124 00:54:35.372375 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:35.374985 containerd[1451]: time="2026-01-24T00:54:35.373114925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f2lh8,Uid:f5d74d51-52c9-4131-a5fa-188081289ff4,Namespace:kube-system,Attempt:0,}" Jan 24 00:54:35.455203 containerd[1451]: time="2026-01-24T00:54:35.452991624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:54:35.455203 containerd[1451]: time="2026-01-24T00:54:35.454919300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:54:35.455203 containerd[1451]: time="2026-01-24T00:54:35.454938957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:35.455203 containerd[1451]: time="2026-01-24T00:54:35.455087631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:54:35.489278 systemd[1]: Started cri-containerd-77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3.scope - libcontainer container 77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3. 
Jan 24 00:54:35.535225 kubelet[2486]: E0124 00:54:35.534526 2486 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5d74d51_52c9_4131_a5fa_188081289ff4.slice/cri-containerd-77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3.scope\": RecentStats: unable to find data in memory cache]" Jan 24 00:54:35.539224 containerd[1451]: time="2026-01-24T00:54:35.539194721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f2lh8,Uid:f5d74d51-52c9-4131-a5fa-188081289ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\"" Jan 24 00:54:35.540639 kubelet[2486]: E0124 00:54:35.540535 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:35.547757 containerd[1451]: time="2026-01-24T00:54:35.547580888Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:54:35.574301 containerd[1451]: time="2026-01-24T00:54:35.574206672Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"883dc24017bdd85c818a43a40c84d76b59d29f5f3c3326cda0ea6fa0e63f5da0\"" Jan 24 00:54:35.575276 containerd[1451]: time="2026-01-24T00:54:35.575184207Z" level=info msg="StartContainer for \"883dc24017bdd85c818a43a40c84d76b59d29f5f3c3326cda0ea6fa0e63f5da0\"" Jan 24 00:54:35.611073 systemd[1]: Started cri-containerd-883dc24017bdd85c818a43a40c84d76b59d29f5f3c3326cda0ea6fa0e63f5da0.scope - libcontainer container 883dc24017bdd85c818a43a40c84d76b59d29f5f3c3326cda0ea6fa0e63f5da0. Jan 24 00:54:35.645163 containerd[1451]: time="2026-01-24T00:54:35.645047567Z" level=info msg="StartContainer for \"883dc24017bdd85c818a43a40c84d76b59d29f5f3c3326cda0ea6fa0e63f5da0\" returns successfully" Jan 24 00:54:35.665062 systemd[1]: cri-containerd-883dc24017bdd85c818a43a40c84d76b59d29f5f3c3326cda0ea6fa0e63f5da0.scope: Deactivated successfully. Jan 24 00:54:35.707484 containerd[1451]: time="2026-01-24T00:54:35.707387393Z" level=info msg="shim disconnected" id=883dc24017bdd85c818a43a40c84d76b59d29f5f3c3326cda0ea6fa0e63f5da0 namespace=k8s.io Jan 24 00:54:35.707484 containerd[1451]: time="2026-01-24T00:54:35.707481338Z" level=warning msg="cleaning up after shim disconnected" id=883dc24017bdd85c818a43a40c84d76b59d29f5f3c3326cda0ea6fa0e63f5da0 namespace=k8s.io Jan 24 00:54:35.707712 containerd[1451]: time="2026-01-24T00:54:35.707495794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:36.514403 kubelet[2486]: E0124 00:54:36.514031 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:36.521654 containerd[1451]: time="2026-01-24T00:54:36.521579883Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:54:36.547086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082870278.mount: Deactivated successfully. 
Jan 24 00:54:36.548238 containerd[1451]: time="2026-01-24T00:54:36.548159412Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"be1fc9697cbe62e79aa08f36e823f10364aa2c93d53f8a94f1b11ad9713bbaf1\"" Jan 24 00:54:36.549166 containerd[1451]: time="2026-01-24T00:54:36.549083837Z" level=info msg="StartContainer for \"be1fc9697cbe62e79aa08f36e823f10364aa2c93d53f8a94f1b11ad9713bbaf1\"" Jan 24 00:54:36.599294 systemd[1]: Started cri-containerd-be1fc9697cbe62e79aa08f36e823f10364aa2c93d53f8a94f1b11ad9713bbaf1.scope - libcontainer container be1fc9697cbe62e79aa08f36e823f10364aa2c93d53f8a94f1b11ad9713bbaf1. Jan 24 00:54:36.634793 containerd[1451]: time="2026-01-24T00:54:36.634648827Z" level=info msg="StartContainer for \"be1fc9697cbe62e79aa08f36e823f10364aa2c93d53f8a94f1b11ad9713bbaf1\" returns successfully" Jan 24 00:54:36.644211 systemd[1]: cri-containerd-be1fc9697cbe62e79aa08f36e823f10364aa2c93d53f8a94f1b11ad9713bbaf1.scope: Deactivated successfully. Jan 24 00:54:36.691548 containerd[1451]: time="2026-01-24T00:54:36.691439191Z" level=info msg="shim disconnected" id=be1fc9697cbe62e79aa08f36e823f10364aa2c93d53f8a94f1b11ad9713bbaf1 namespace=k8s.io Jan 24 00:54:36.691548 containerd[1451]: time="2026-01-24T00:54:36.691518489Z" level=warning msg="cleaning up after shim disconnected" id=be1fc9697cbe62e79aa08f36e823f10364aa2c93d53f8a94f1b11ad9713bbaf1 namespace=k8s.io Jan 24 00:54:36.691548 containerd[1451]: time="2026-01-24T00:54:36.691535770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:36.878656 kubelet[2486]: I0124 00:54:36.878339 2486 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T00:54:36Z","lastTransitionTime":"2026-01-24T00:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 24 00:54:37.216631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be1fc9697cbe62e79aa08f36e823f10364aa2c93d53f8a94f1b11ad9713bbaf1-rootfs.mount: Deactivated successfully. Jan 24 00:54:37.519201 kubelet[2486]: E0124 00:54:37.519025 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:37.531396 containerd[1451]: time="2026-01-24T00:54:37.531290407Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:54:37.564746 containerd[1451]: time="2026-01-24T00:54:37.564643114Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6e88e6bb00d12e973d3c87a25cbc6db8c305859f6797864fd950a8c0d6c964a6\"" Jan 24 00:54:37.565902 containerd[1451]: time="2026-01-24T00:54:37.565409272Z" level=info msg="StartContainer for \"6e88e6bb00d12e973d3c87a25cbc6db8c305859f6797864fd950a8c0d6c964a6\"" Jan 24 00:54:37.611323 systemd[1]: Started cri-containerd-6e88e6bb00d12e973d3c87a25cbc6db8c305859f6797864fd950a8c0d6c964a6.scope - libcontainer container 6e88e6bb00d12e973d3c87a25cbc6db8c305859f6797864fd950a8c0d6c964a6. 
Jan 24 00:54:37.649938 containerd[1451]: time="2026-01-24T00:54:37.649173569Z" level=info msg="StartContainer for \"6e88e6bb00d12e973d3c87a25cbc6db8c305859f6797864fd950a8c0d6c964a6\" returns successfully" Jan 24 00:54:37.651759 systemd[1]: cri-containerd-6e88e6bb00d12e973d3c87a25cbc6db8c305859f6797864fd950a8c0d6c964a6.scope: Deactivated successfully. Jan 24 00:54:37.694480 containerd[1451]: time="2026-01-24T00:54:37.694406916Z" level=info msg="shim disconnected" id=6e88e6bb00d12e973d3c87a25cbc6db8c305859f6797864fd950a8c0d6c964a6 namespace=k8s.io Jan 24 00:54:37.694480 containerd[1451]: time="2026-01-24T00:54:37.694468400Z" level=warning msg="cleaning up after shim disconnected" id=6e88e6bb00d12e973d3c87a25cbc6db8c305859f6797864fd950a8c0d6c964a6 namespace=k8s.io Jan 24 00:54:37.694480 containerd[1451]: time="2026-01-24T00:54:37.694477607Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:38.217230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e88e6bb00d12e973d3c87a25cbc6db8c305859f6797864fd950a8c0d6c964a6-rootfs.mount: Deactivated successfully. Jan 24 00:54:38.524844 kubelet[2486]: E0124 00:54:38.524649 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:38.531285 containerd[1451]: time="2026-01-24T00:54:38.530967950Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 00:54:38.551697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2281546247.mount: Deactivated successfully. Jan 24 00:54:38.553411 containerd[1451]: time="2026-01-24T00:54:38.553075273Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d\"" Jan 24 00:54:38.554227 containerd[1451]: time="2026-01-24T00:54:38.554089860Z" level=info msg="StartContainer for \"9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d\"" Jan 24 00:54:38.611205 systemd[1]: Started cri-containerd-9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d.scope - libcontainer container 9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d. Jan 24 00:54:38.648475 systemd[1]: cri-containerd-9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d.scope: Deactivated successfully. 
Jan 24 00:54:38.651694 containerd[1451]: time="2026-01-24T00:54:38.651645386Z" level=info msg="StartContainer for \"9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d\" returns successfully" Jan 24 00:54:38.692825 containerd[1451]: time="2026-01-24T00:54:38.692724729Z" level=info msg="shim disconnected" id=9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d namespace=k8s.io Jan 24 00:54:38.692825 containerd[1451]: time="2026-01-24T00:54:38.692810116Z" level=warning msg="cleaning up after shim disconnected" id=9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d namespace=k8s.io Jan 24 00:54:38.692825 containerd[1451]: time="2026-01-24T00:54:38.692820195Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:54:39.217556 systemd[1]: run-containerd-runc-k8s.io-9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d-runc.NAxO4S.mount: Deactivated successfully. Jan 24 00:54:39.217722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9042a5fc9e9fa967efe5242c7a391b7fdef8127274c8cc6b2239675593bfb16d-rootfs.mount: Deactivated successfully. Jan 24 00:54:39.533257 kubelet[2486]: E0124 00:54:39.532745 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:39.541955 containerd[1451]: time="2026-01-24T00:54:39.541825456Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 00:54:39.587933 containerd[1451]: time="2026-01-24T00:54:39.587817627Z" level=info msg="CreateContainer within sandbox \"77194afcd2d8b3cbf3a365ab5c06200356735c7158e8672e531a2cabcb8724e3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ed7630c39f237d796106417d2a1e4782e942a63ccd97b8d8d84f9a2bf828a0af\"" Jan 24 00:54:39.588702 containerd[1451]: time="2026-01-24T00:54:39.588654417Z" level=info msg="StartContainer for \"ed7630c39f237d796106417d2a1e4782e942a63ccd97b8d8d84f9a2bf828a0af\"" Jan 24 00:54:39.634261 systemd[1]: Started cri-containerd-ed7630c39f237d796106417d2a1e4782e942a63ccd97b8d8d84f9a2bf828a0af.scope - libcontainer container ed7630c39f237d796106417d2a1e4782e942a63ccd97b8d8d84f9a2bf828a0af. Jan 24 00:54:39.696202 containerd[1451]: time="2026-01-24T00:54:39.695621359Z" level=info msg="StartContainer for \"ed7630c39f237d796106417d2a1e4782e942a63ccd97b8d8d84f9a2bf828a0af\" returns successfully" Jan 24 00:54:40.283947 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 24 00:54:40.539658 kubelet[2486]: E0124 00:54:40.539453 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:41.542090 kubelet[2486]: E0124 00:54:41.542005 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:43.838650 systemd[1]: run-containerd-runc-k8s.io-ed7630c39f237d796106417d2a1e4782e942a63ccd97b8d8d84f9a2bf828a0af-runc.MC8FQm.mount: Deactivated successfully. 
Jan 24 00:54:43.894640 systemd-networkd[1381]: lxc_health: Link UP Jan 24 00:54:43.911278 systemd-networkd[1381]: lxc_health: Gained carrier Jan 24 00:54:43.952910 kubelet[2486]: E0124 00:54:43.951227 2486 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52018->127.0.0.1:39217: write tcp 127.0.0.1:52018->127.0.0.1:39217: write: broken pipe Jan 24 00:54:45.100240 systemd-networkd[1381]: lxc_health: Gained IPv6LL Jan 24 00:54:45.372418 kubelet[2486]: E0124 00:54:45.371826 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:45.388154 kubelet[2486]: I0124 00:54:45.388028 2486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f2lh8" podStartSLOduration=10.388008773 podStartE2EDuration="10.388008773s" podCreationTimestamp="2026-01-24 00:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:54:40.566557013 +0000 UTC m=+85.496238894" watchObservedRunningTime="2026-01-24 00:54:45.388008773 +0000 UTC m=+90.317690664" Jan 24 00:54:45.552552 kubelet[2486]: E0124 00:54:45.552467 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:46.554992 kubelet[2486]: E0124 00:54:46.554363 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:47.200244 kubelet[2486]: E0124 00:54:47.199771 2486 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:54:50.358604 sshd[4346]: pam_unix(sshd:session): session closed for user core Jan 24 00:54:50.365148 systemd[1]: sshd@26-10.0.0.97:22-10.0.0.1:40594.service: Deactivated successfully. Jan 24 00:54:50.367085 systemd[1]: session-27.scope: Deactivated successfully. Jan 24 00:54:50.367886 systemd-logind[1439]: Session 27 logged out. Waiting for processes to exit. Jan 24 00:54:50.369315 systemd-logind[1439]: Removed session 27.