Jan 24 00:58:46.110187 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:58:46.110208 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:58:46.110220 kernel: BIOS-provided physical RAM map:
Jan 24 00:58:46.110226 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 24 00:58:46.110231 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 24 00:58:46.110236 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 00:58:46.110242 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 24 00:58:46.110248 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 24 00:58:46.110253 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 00:58:46.110261 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 00:58:46.110267 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:58:46.110272 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 00:58:46.110278 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:58:46.110283 kernel: NX (Execute Disable) protection: active
Jan 24 00:58:46.110290 kernel: APIC: Static calls initialized
Jan 24 00:58:46.110298 kernel: SMBIOS 2.8 present.
Jan 24 00:58:46.110304 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 24 00:58:46.110309 kernel: Hypervisor detected: KVM
Jan 24 00:58:46.110315 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:58:46.110321 kernel: kvm-clock: using sched offset of 5146249228 cycles
Jan 24 00:58:46.110327 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:58:46.110333 kernel: tsc: Detected 2445.426 MHz processor
Jan 24 00:58:46.110339 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:58:46.110345 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:58:46.110354 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 24 00:58:46.110360 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 00:58:46.110366 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:58:46.110371 kernel: Using GB pages for direct mapping
Jan 24 00:58:46.110377 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:58:46.110383 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 24 00:58:46.110389 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:58:46.110395 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:58:46.110400 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:58:46.110409 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 24 00:58:46.110415 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:58:46.110421 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:58:46.110426 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:58:46.110432 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:58:46.110438 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 24 00:58:46.110444 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 24 00:58:46.110454 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 24 00:58:46.110462 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 24 00:58:46.110468 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 24 00:58:46.110475 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 24 00:58:46.110481 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 24 00:58:46.110487 kernel: No NUMA configuration found
Jan 24 00:58:46.110493 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 24 00:58:46.110501 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 24 00:58:46.110507 kernel: Zone ranges:
Jan 24 00:58:46.110514 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:58:46.110520 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 24 00:58:46.110526 kernel: Normal empty
Jan 24 00:58:46.110532 kernel: Movable zone start for each node
Jan 24 00:58:46.110538 kernel: Early memory node ranges
Jan 24 00:58:46.110544 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 00:58:46.110550 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 24 00:58:46.110556 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 24 00:58:46.110564 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:58:46.110570 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 00:58:46.110576 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 24 00:58:46.110583 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:58:46.110589 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:58:46.110595 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:58:46.110601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:58:46.110607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:58:46.110613 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:58:46.110622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:58:46.110629 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:58:46.110635 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:58:46.110641 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:58:46.110647 kernel: TSC deadline timer available
Jan 24 00:58:46.110653 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 24 00:58:46.110659 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:58:46.110665 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:58:46.110671 kernel: kvm-guest: setup PV sched yield
Jan 24 00:58:46.110679 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 00:58:46.110685 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:58:46.110692 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:58:46.110698 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 24 00:58:46.110704 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Jan 24 00:58:46.110710 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Jan 24 00:58:46.110716 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 24 00:58:46.110722 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:58:46.110728 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:58:46.110737 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:58:46.110743 kernel: random: crng init done
Jan 24 00:58:46.110750 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:58:46.110756 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:58:46.110762 kernel: Fallback order for Node 0: 0
Jan 24 00:58:46.110768 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 24 00:58:46.110774 kernel: Policy zone: DMA32
Jan 24 00:58:46.110780 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:58:46.110786 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved)
Jan 24 00:58:46.110795 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 24 00:58:46.110801 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:58:46.110807 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:58:46.110813 kernel: Dynamic Preempt: voluntary
Jan 24 00:58:46.110820 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:58:46.110827 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:58:46.110833 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 24 00:58:46.110840 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:58:46.110846 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:58:46.110855 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:58:46.110861 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:58:46.110867 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 24 00:58:46.110873 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 24 00:58:46.110879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:58:46.110885 kernel: Console: colour VGA+ 80x25
Jan 24 00:58:46.110891 kernel: printk: console [ttyS0] enabled
Jan 24 00:58:46.110897 kernel: ACPI: Core revision 20230628
Jan 24 00:58:46.110937 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:58:46.110946 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:58:46.110952 kernel: x2apic enabled
Jan 24 00:58:46.110959 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:58:46.110965 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:58:46.110971 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:58:46.110977 kernel: kvm-guest: setup PV IPIs
Jan 24 00:58:46.110983 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:58:46.111000 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:58:46.111006 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 24 00:58:46.111013 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:58:46.111019 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:58:46.111026 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:58:46.111035 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:58:46.111042 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:58:46.111048 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:58:46.111055 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:58:46.111061 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:58:46.111071 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:58:46.111077 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:58:46.111084 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:58:46.111090 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:58:46.111097 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:58:46.111103 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:58:46.111110 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:58:46.111149 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:58:46.111160 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:58:46.111166 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 24 00:58:46.111173 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:58:46.111179 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:58:46.111185 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:58:46.111192 kernel: landlock: Up and running.
Jan 24 00:58:46.111198 kernel: SELinux: Initializing.
Jan 24 00:58:46.111205 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:58:46.111211 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:58:46.111220 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:58:46.111227 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:58:46.111233 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:58:46.111240 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 24 00:58:46.111246 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 24 00:58:46.111253 kernel: signal: max sigframe size: 1776
Jan 24 00:58:46.111259 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:58:46.111266 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:58:46.111272 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:58:46.111281 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:58:46.111288 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:58:46.111294 kernel: .... node #0, CPUs: #1 #2 #3
Jan 24 00:58:46.111300 kernel: smp: Brought up 1 node, 4 CPUs
Jan 24 00:58:46.111306 kernel: smpboot: Max logical packages: 1
Jan 24 00:58:46.111327 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 24 00:58:46.111334 kernel: devtmpfs: initialized
Jan 24 00:58:46.111340 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:58:46.111347 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:58:46.111356 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 24 00:58:46.111362 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:58:46.111369 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:58:46.111375 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:58:46.111381 kernel: audit: type=2000 audit(1769216324.537:1): state=initialized audit_enabled=0 res=1
Jan 24 00:58:46.111388 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:58:46.111394 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:58:46.111401 kernel: cpuidle: using governor menu
Jan 24 00:58:46.111407 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:58:46.111416 kernel: dca service started, version 1.12.1
Jan 24 00:58:46.111423 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 00:58:46.111429 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 00:58:46.111435 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:58:46.111442 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:58:46.111448 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:58:46.111455 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:58:46.111462 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:58:46.111468 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:58:46.111477 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:58:46.111483 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:58:46.111489 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:58:46.111496 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:58:46.111502 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:58:46.111509 kernel: ACPI: Interpreter enabled
Jan 24 00:58:46.111515 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:58:46.111521 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:58:46.111528 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:58:46.111537 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:58:46.111543 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:58:46.111550 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:58:46.111735 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:58:46.111961 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:58:46.112092 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:58:46.112102 kernel: PCI host bridge to bus 0000:00
Jan 24 00:58:46.112290 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:58:46.112410 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:58:46.112526 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:58:46.112635 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 24 00:58:46.112743 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 00:58:46.112856 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 24 00:58:46.113009 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:58:46.113214 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:58:46.113346 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 24 00:58:46.113467 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 24 00:58:46.113585 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 24 00:58:46.113702 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 24 00:58:46.113819 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:58:46.114018 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 24 00:58:46.114192 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 24 00:58:46.114320 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 24 00:58:46.114442 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 24 00:58:46.114568 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 24 00:58:46.114688 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 24 00:58:46.114806 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 24 00:58:46.114995 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 24 00:58:46.115176 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 24 00:58:46.115306 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 24 00:58:46.115425 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 24 00:58:46.115544 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 24 00:58:46.115661 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 24 00:58:46.115786 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:58:46.115960 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:58:46.116093 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:58:46.116276 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 24 00:58:46.116401 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 24 00:58:46.116528 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:58:46.116647 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 00:58:46.116661 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:58:46.116668 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:58:46.116675 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:58:46.116682 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:58:46.116688 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:58:46.116695 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:58:46.116701 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:58:46.116708 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:58:46.116714 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:58:46.116723 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:58:46.116730 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:58:46.116736 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:58:46.116742 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:58:46.116749 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:58:46.116755 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:58:46.116816 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:58:46.116824 kernel: iommu: Default domain type: Translated
Jan 24 00:58:46.116831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:58:46.116841 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:58:46.116848 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:58:46.116855 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 24 00:58:46.116861 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 24 00:58:46.117037 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:58:46.117223 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:58:46.117343 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:58:46.117353 kernel: vgaarb: loaded
Jan 24 00:58:46.117359 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:58:46.117370 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:58:46.117377 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:58:46.117384 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:58:46.117390 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:58:46.117397 kernel: pnp: PnP ACPI init
Jan 24 00:58:46.117578 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 00:58:46.117599 kernel: pnp: PnP ACPI: found 6 devices
Jan 24 00:58:46.117609 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:58:46.117620 kernel: NET: Registered PF_INET protocol family
Jan 24 00:58:46.117627 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:58:46.117634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:58:46.117640 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:58:46.117647 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:58:46.117654 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:58:46.117660 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:58:46.117667 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:58:46.117674 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:58:46.117683 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:58:46.117689 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:58:46.117809 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:58:46.117980 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:58:46.118098 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:58:46.118258 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 24 00:58:46.118369 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:58:46.118478 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 24 00:58:46.118492 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:58:46.118499 kernel: Initialise system trusted keyrings
Jan 24 00:58:46.118506 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:58:46.118512 kernel: Key type asymmetric registered
Jan 24 00:58:46.118518 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:58:46.118525 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:58:46.118532 kernel: io scheduler mq-deadline registered
Jan 24 00:58:46.118538 kernel: io scheduler kyber registered
Jan 24 00:58:46.118545 kernel: io scheduler bfq registered
Jan 24 00:58:46.118554 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:58:46.118561 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:58:46.118568 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:58:46.118575 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 24 00:58:46.118581 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:58:46.118588 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:58:46.118595 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:58:46.118602 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:58:46.118608 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:58:46.118738 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 24 00:58:46.118748 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:58:46.118859 kernel: rtc_cmos 00:04: registered as rtc0
Jan 24 00:58:46.119011 kernel: rtc_cmos 00:04: setting system clock to 2026-01-24T00:58:45 UTC (1769216325)
Jan 24 00:58:46.119177 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:58:46.119189 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:58:46.119196 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:58:46.119202 kernel: Segment Routing with IPv6
Jan 24 00:58:46.119213 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:58:46.119220 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:58:46.119227 kernel: Key type dns_resolver registered
Jan 24 00:58:46.119233 kernel: IPI shorthand broadcast: enabled
Jan 24 00:58:46.119240 kernel: sched_clock: Marking stable (1004014824, 409049843)->(1773084689, -360020022)
Jan 24 00:58:46.119246 kernel: registered taskstats version 1
Jan 24 00:58:46.119253 kernel: Loading compiled-in X.509 certificates
Jan 24 00:58:46.119259 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:58:46.119266 kernel: Key type .fscrypt registered
Jan 24 00:58:46.119275 kernel: Key type fscrypt-provisioning registered
Jan 24 00:58:46.119282 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:58:46.119288 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:58:46.119295 kernel: ima: No architecture policies found
Jan 24 00:58:46.119301 kernel: clk: Disabling unused clocks
Jan 24 00:58:46.119307 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:58:46.119314 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:58:46.119320 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:58:46.119327 kernel: Run /init as init process
Jan 24 00:58:46.119336 kernel: with arguments:
Jan 24 00:58:46.119343 kernel: /init
Jan 24 00:58:46.119349 kernel: with environment:
Jan 24 00:58:46.119355 kernel: HOME=/
Jan 24 00:58:46.119362 kernel: TERM=linux
Jan 24 00:58:46.119370 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:58:46.119378 systemd[1]: Detected virtualization kvm.
Jan 24 00:58:46.119386 systemd[1]: Detected architecture x86-64.
Jan 24 00:58:46.119395 systemd[1]: Running in initrd.
Jan 24 00:58:46.119402 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:58:46.119409 systemd[1]: Hostname set to <localhost>.
Jan 24 00:58:46.119416 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:58:46.119423 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:58:46.119430 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:58:46.119437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:58:46.119444 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:58:46.119454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:58:46.119461 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:58:46.119468 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:58:46.119476 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:58:46.119483 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:58:46.119490 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:58:46.119500 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:58:46.119507 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:58:46.119514 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:58:46.119521 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:58:46.119540 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:58:46.119550 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:58:46.119558 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:58:46.119567 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:58:46.119575 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:58:46.119585 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:58:46.119592 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:58:46.119599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:58:46.119606 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:58:46.119613 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:58:46.119620 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:58:46.119631 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:58:46.119638 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:58:46.119645 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:58:46.119652 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:58:46.119659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:58:46.119666 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:58:46.119697 systemd-journald[194]: Collecting audit messages is disabled. Jan 24 00:58:46.119718 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:58:46.119726 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:58:46.119734 systemd-journald[194]: Journal started Jan 24 00:58:46.119752 systemd-journald[194]: Runtime Journal (/run/log/journal/1f49d74050cf41c2b1144cb11ededc25) is 6.0M, max 48.4M, 42.3M free. Jan 24 00:58:46.123503 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:58:46.126972 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:58:46.132358 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:58:46.280772 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:58:46.280798 kernel: Bridge firewalling registered Jan 24 00:58:46.138421 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:58:46.151989 systemd-modules-load[195]: Inserted module 'overlay' Jan 24 00:58:46.190506 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 24 00:58:46.288464 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:58:46.294514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:58:46.301870 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:58:46.308364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:58:46.316392 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:58:46.327878 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:58:46.334979 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:58:46.340807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:58:46.347874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:58:46.368483 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 24 00:58:46.373824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:58:46.382051 dracut-cmdline[228]: dracut-dracut-053 Jan 24 00:58:46.385112 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:58:46.435743 systemd-resolved[229]: Positive Trust Anchors: Jan 24 00:58:46.435782 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:58:46.435808 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:58:46.442103 systemd-resolved[229]: Defaulting to hostname 'linux'. Jan 24 00:58:46.443410 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:58:46.460037 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:58:46.486184 kernel: SCSI subsystem initialized Jan 24 00:58:46.496170 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:58:46.508194 kernel: iscsi: registered transport (tcp) Jan 24 00:58:46.529600 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:58:46.529685 kernel: QLogic iSCSI HBA Driver Jan 24 00:58:46.581762 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:58:46.596306 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:58:46.624411 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:58:46.624472 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:58:46.627229 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:58:46.670210 kernel: raid6: avx2x4 gen() 34149 MB/s Jan 24 00:58:46.688192 kernel: raid6: avx2x2 gen() 31130 MB/s Jan 24 00:58:46.707302 kernel: raid6: avx2x1 gen() 26610 MB/s Jan 24 00:58:46.707328 kernel: raid6: using algorithm avx2x4 gen() 34149 MB/s Jan 24 00:58:46.727179 kernel: raid6: .... xor() 5077 MB/s, rmw enabled Jan 24 00:58:46.727220 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:58:46.747193 kernel: xor: automatically using best checksumming function avx Jan 24 00:58:46.893210 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:58:46.906837 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:58:46.918416 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:58:46.930346 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jan 24 00:58:46.934991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 24 00:58:46.941311 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:58:46.963574 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jan 24 00:58:47.007193 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:58:47.017370 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:58:47.090514 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:58:47.102481 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:58:47.117214 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:58:47.126245 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:58:47.133003 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:58:47.140451 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:58:47.156087 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 24 00:58:47.154724 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:58:47.170281 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 24 00:58:47.170805 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:58:47.185066 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:58:47.185111 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:58:47.185179 kernel: GPT:9289727 != 19775487 Jan 24 00:58:47.188493 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:58:47.188525 kernel: GPT:9289727 != 19775487 Jan 24 00:58:47.193530 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:58:47.193581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:58:47.204225 kernel: libata version 3.00 loaded. Jan 24 00:58:47.207957 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:58:47.209063 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:58:47.225420 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:58:47.218891 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:58:47.257559 kernel: AES CTR mode by8 optimization enabled Jan 24 00:58:47.257590 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (478) Jan 24 00:58:47.257610 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:58:47.258082 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:58:47.258096 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:58:47.258314 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (464) Jan 24 00:58:47.258325 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:58:47.247885 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:58:47.268732 kernel: scsi host0: ahci Jan 24 00:58:47.268972 kernel: scsi host1: ahci Jan 24 00:58:47.269189 kernel: scsi host2: ahci Jan 24 00:58:47.269340 kernel: scsi host3: ahci Jan 24 00:58:47.248056 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 24 00:58:47.293286 kernel: scsi host4: ahci Jan 24 00:58:47.296347 kernel: scsi host5: ahci Jan 24 00:58:47.296566 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 24 00:58:47.296579 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 24 00:58:47.296589 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 24 00:58:47.296598 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 24 00:58:47.296607 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 24 00:58:47.296616 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 24 00:58:47.261841 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:58:47.293325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:58:47.306708 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 24 00:58:47.307777 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 24 00:58:47.315804 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 24 00:58:47.438774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:58:47.455728 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 24 00:58:47.463321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:58:47.479384 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:58:47.483441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:58:47.496878 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:58:47.496904 disk-uuid[554]: Primary Header is updated. Jan 24 00:58:47.496904 disk-uuid[554]: Secondary Entries is updated. Jan 24 00:58:47.496904 disk-uuid[554]: Secondary Header is updated. Jan 24 00:58:47.505284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:58:47.505664 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:58:47.604913 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:58:47.605008 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:58:47.605023 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:58:47.605033 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 24 00:58:47.608176 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:58:47.608229 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 24 00:58:47.610672 kernel: ata3.00: applying bridge limits Jan 24 00:58:47.611208 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:58:47.613187 kernel: ata3.00: configured for UDMA/100 Jan 24 00:58:47.618205 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 24 00:58:47.674396 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 24 00:58:47.674736 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:58:47.689204 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 24 00:58:48.507834 disk-uuid[556]: The operation has completed successfully. 
Jan 24 00:58:48.510500 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 24 00:58:48.541048 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:58:48.541305 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:58:48.575426 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:58:48.582645 sh[595]: Success Jan 24 00:58:48.598178 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:58:48.639627 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:58:48.648037 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:58:48.655654 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:58:48.672765 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:58:48.672804 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:58:48.672815 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:58:48.677747 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:58:48.677766 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:58:48.685785 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:58:48.690586 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:58:48.706329 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:58:48.708650 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:58:48.734002 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:58:48.734061 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:58:48.734072 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:58:48.741181 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:58:48.752773 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:58:48.758491 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:58:48.764052 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:58:48.776323 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:58:48.832976 ignition[705]: Ignition 2.19.0 Jan 24 00:58:48.833009 ignition[705]: Stage: fetch-offline Jan 24 00:58:48.833070 ignition[705]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:58:48.833086 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:58:48.833276 ignition[705]: parsed url from cmdline: "" Jan 24 00:58:48.833283 ignition[705]: no config URL provided Jan 24 00:58:48.833294 ignition[705]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:58:48.833308 ignition[705]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:58:48.833341 ignition[705]: op(1): [started] loading QEMU firmware config module Jan 24 00:58:48.833349 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 24 00:58:48.840658 ignition[705]: op(1): [finished] loading QEMU firmware config module Jan 24 00:58:48.840679 ignition[705]: QEMU firmware config was not found. Ignoring... 
Jan 24 00:58:48.856322 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:58:48.870279 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:58:48.891906 systemd-networkd[783]: lo: Link UP Jan 24 00:58:48.891956 systemd-networkd[783]: lo: Gained carrier Jan 24 00:58:48.893774 systemd-networkd[783]: Enumeration completed Jan 24 00:58:48.894156 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:58:48.894905 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:58:48.894910 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:58:48.896471 systemd-networkd[783]: eth0: Link UP Jan 24 00:58:48.896475 systemd-networkd[783]: eth0: Gained carrier Jan 24 00:58:48.896482 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:58:48.900790 systemd[1]: Reached target network.target - Network. Jan 24 00:58:48.921201 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:58:49.037991 ignition[705]: parsing config with SHA512: d375459731136d1514d614c1b4ce6f9862a9cca11296168c46bfc9a87667f528ed67ebcdfbb3519f3331ed01973c5d17c93b7bbd0e041c26d1cf144e30ceddbe Jan 24 00:58:49.042111 unknown[705]: fetched base config from "system" Jan 24 00:58:49.042180 unknown[705]: fetched user config from "qemu" Jan 24 00:58:49.042651 ignition[705]: fetch-offline: fetch-offline passed Jan 24 00:58:49.045046 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:58:49.042723 ignition[705]: Ignition finished successfully Jan 24 00:58:49.049651 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 24 00:58:49.062403 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:58:49.078748 ignition[787]: Ignition 2.19.0 Jan 24 00:58:49.078771 ignition[787]: Stage: kargs Jan 24 00:58:49.078961 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:58:49.082688 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:58:49.078974 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:58:49.079926 ignition[787]: kargs: kargs passed Jan 24 00:58:49.080002 ignition[787]: Ignition finished successfully Jan 24 00:58:49.102293 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:58:49.117326 ignition[796]: Ignition 2.19.0 Jan 24 00:58:49.118982 ignition[796]: Stage: disks Jan 24 00:58:49.119204 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:58:49.119224 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:58:49.121737 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:58:49.119986 ignition[796]: disks: disks passed Jan 24 00:58:49.128801 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:58:49.120026 ignition[796]: Ignition finished successfully Jan 24 00:58:49.134052 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:58:49.137268 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 24 00:58:49.139884 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:58:49.144190 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:58:49.164393 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:58:49.177372 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:58:49.181767 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:58:49.201280 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:58:49.298190 kernel: EXT4-fs (vda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:58:49.298304 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:58:49.301728 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:58:49.316247 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:58:49.319359 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:58:49.331724 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Jan 24 00:58:49.331750 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:58:49.331760 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:58:49.324037 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:58:49.348431 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:58:49.348448 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:58:49.324076 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:58:49.324097 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:58:49.343040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:58:49.348479 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:58:49.369302 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:58:49.405007 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:58:49.409766 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:58:49.416759 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:58:49.420795 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:58:49.521205 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:58:49.532338 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:58:49.538322 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:58:49.549258 kernel: BTRFS info (device vda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:58:49.566035 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 24 00:58:49.581470 ignition[929]: INFO : Ignition 2.19.0 Jan 24 00:58:49.581470 ignition[929]: INFO : Stage: mount Jan 24 00:58:49.586080 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:58:49.586080 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:58:49.594803 ignition[929]: INFO : mount: mount passed Jan 24 00:58:49.596820 ignition[929]: INFO : Ignition finished successfully Jan 24 00:58:49.601347 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:58:49.612342 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:58:49.668614 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:58:49.684338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:58:49.696795 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Jan 24 00:58:49.696829 kernel: BTRFS info (device vda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:58:49.696840 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:58:49.701026 kernel: BTRFS info (device vda6): using free space tree Jan 24 00:58:49.706186 kernel: BTRFS info (device vda6): auto enabling async discard Jan 24 00:58:49.708103 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:58:49.730873 ignition[958]: INFO : Ignition 2.19.0 Jan 24 00:58:49.730873 ignition[958]: INFO : Stage: files Jan 24 00:58:49.737584 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:58:49.737584 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:58:49.737584 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:58:49.737584 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:58:49.737584 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:58:49.737584 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:58:49.737584 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:58:49.737584 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:58:49.736890 unknown[958]: wrote ssh authorized keys file for user: core Jan 24 00:58:49.768580 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:58:49.768580 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:58:49.768580 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:58:49.768580 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:58:49.789695 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 24 00:58:49.881457 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:58:49.881457 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 24 00:58:49.891607 ignition[958]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 24 00:58:50.038433 systemd-networkd[783]: eth0: Gained IPv6LL Jan 24 00:58:50.098220 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 24 00:58:50.254475 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 24 00:58:50.254475 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:58:50.263647 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:58:50.408992 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 24 00:58:50.842548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:58:50.842548 ignition[958]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 24 00:58:50.852433 ignition[958]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 24 00:58:50.921454 ignition[958]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:58:50.921454 ignition[958]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 24 00:58:50.921454 ignition[958]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 24 00:58:50.921454 ignition[958]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:58:50.921454 ignition[958]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:58:50.921454 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:58:50.921454 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:58:50.921454 ignition[958]: INFO : files: files passed Jan 24 00:58:50.921454 ignition[958]: INFO : Ignition finished successfully Jan 24 00:58:50.879274 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:58:50.903362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:58:50.911541 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:58:50.982242 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jan 24 00:58:50.917805 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:58:50.993893 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:58:50.993893 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:58:50.917936 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 24 00:58:51.005007 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:58:50.932984 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:58:50.939358 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:58:50.946856 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:58:50.977716 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:58:50.977853 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:58:50.982360 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:58:50.988435 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:58:50.991418 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:58:51.010356 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:58:51.026109 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:58:51.040291 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:58:51.052552 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:58:51.055743 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:58:51.061527 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:58:51.066702 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:58:51.066824 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:58:51.072733 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:58:51.077320 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:58:51.082651 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:58:51.087924 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:58:51.093298 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:58:51.098991 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:58:51.103232 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:58:51.103630 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:58:51.104068 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:58:51.175651 ignition[1012]: INFO : Ignition 2.19.0 Jan 24 00:58:51.175651 ignition[1012]: INFO : Stage: umount Jan 24 00:58:51.175651 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:58:51.175651 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 24 00:58:51.175651 ignition[1012]: INFO : umount: umount passed Jan 24 00:58:51.175651 ignition[1012]: INFO : Ignition finished successfully Jan 24 00:58:51.104909 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:58:51.105764 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:58:51.105900 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:58:51.107152 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:58:51.107534 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 24 00:58:51.107922 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:58:51.108069 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:58:51.108798 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:58:51.108910 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:58:51.110087 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:58:51.110242 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:58:51.110591 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:58:51.110944 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:58:51.115302 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:58:51.115759 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:58:51.116200 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:58:51.116578 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:58:51.116685 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:58:51.117062 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:58:51.117199 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:58:51.117466 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:58:51.117581 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:58:51.117888 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:58:51.118027 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:58:51.158420 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:58:51.161916 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:58:51.162168 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:58:51.172384 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:58:51.175557 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:58:51.175729 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:58:51.177284 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:58:51.177474 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:58:51.187489 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:58:51.187665 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:58:51.192823 systemd[1]: Stopped target network.target - Network. Jan 24 00:58:51.197793 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:58:51.197878 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:58:51.203809 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:58:51.203860 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:58:51.208413 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:58:51.423763 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 24 00:58:51.208461 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:58:51.213450 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:58:51.213499 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 24 00:58:51.216480 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:58:51.221292 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:58:51.227647 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:58:51.228668 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:58:51.228790 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:58:51.233335 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:58:51.233463 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:58:51.235819 systemd-networkd[783]: eth0: DHCPv6 lease lost Jan 24 00:58:51.239801 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:58:51.240011 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:58:51.247335 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:58:51.247383 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:58:51.267290 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:58:51.272264 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:58:51.272324 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:58:51.274742 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:58:51.274793 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:58:51.275757 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:58:51.275803 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:58:51.276755 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:58:51.276799 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:58:51.277750 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:58:51.278428 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:58:51.278528 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:58:51.280363 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:58:51.280446 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:58:51.298936 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:58:51.299100 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:58:51.313209 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:58:51.313392 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:58:51.318028 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:58:51.318079 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:58:51.323012 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:58:51.323051 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:58:51.329070 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:58:51.329164 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:58:51.335023 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:58:51.335073 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 24 00:58:51.341263 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:58:51.341312 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:58:51.361290 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:58:51.365228 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:58:51.365285 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:58:51.366770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:58:51.366820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:58:51.368568 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:58:51.368685 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:58:51.369057 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:58:51.370601 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:58:51.382379 systemd[1]: Switching root. Jan 24 00:58:51.549731 systemd-journald[194]: Journal stopped Jan 24 00:58:52.684313 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:58:52.684378 kernel: SELinux: policy capability open_perms=1 Jan 24 00:58:52.684390 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:58:52.684404 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:58:52.684420 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:58:52.684435 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:58:52.684449 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:58:52.684464 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:58:52.684474 kernel: audit: type=1403 audit(1769216331.666:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:58:52.684486 systemd[1]: Successfully loaded SELinux policy in 48.843ms. Jan 24 00:58:52.684503 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.584ms. Jan 24 00:58:52.684515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:58:52.684528 systemd[1]: Detected virtualization kvm. Jan 24 00:58:52.684539 systemd[1]: Detected architecture x86-64. Jan 24 00:58:52.684550 systemd[1]: Detected first boot. Jan 24 00:58:52.684562 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:58:52.684573 zram_generator::config[1077]: No configuration found. Jan 24 00:58:52.684585 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:58:52.684595 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:58:52.684606 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 24 00:58:52.684620 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:58:52.684631 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:58:52.684642 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:58:52.684653 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
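
systemd's "Initializing machine ID from VM UUID" above means /etc/machine-id was derived from the hypervisor-supplied DMI product UUID on this first boot; the runtime journal path /run/log/journal/1f49d74050cf41c2b1144cb11ededc25 further down shows the resulting ID. A hedged sketch of that derivation (the sysfs path and normalization are assumptions about the mechanism, not a quote of systemd's C implementation):

    from pathlib import Path

    # Sketch: a machine-id-shaped string from the DMI product UUID, which KVM
    # guests expose through sysfs (path is an assumption; systemd's real logic
    # handles more UUID sources and error cases).
    raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = raw.replace("-", "").lower()   # 32 hex chars, /etc/machine-id format
    assert len(machine_id) == 32
    print(machine_id)
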
Jan 24 00:58:52.684664 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:58:52.684675 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:58:52.684686 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:58:52.684697 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:58:52.684710 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:58:52.684721 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:58:52.684732 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:58:52.684743 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:58:52.684757 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:58:52.684768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:58:52.684779 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:58:52.684789 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:58:52.684801 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:58:52.684815 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:58:52.684826 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:58:52.684837 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:58:52.684848 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:58:52.684859 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:58:52.684870 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:58:52.684881 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:58:52.684892 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:58:52.684902 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:58:52.684915 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:58:52.684926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:58:52.684937 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:58:52.684947 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:58:52.684958 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:58:52.684996 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:58:52.685007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:58:52.685017 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:58:52.685032 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:58:52.685043 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:58:52.685054 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 24 00:58:52.685065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:58:52.685076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:58:52.685087 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:58:52.685098 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:58:52.685109 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:58:52.685155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:58:52.685171 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:58:52.685182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:58:52.685193 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:58:52.685218 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 24 00:58:52.685231 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 24 00:58:52.685242 kernel: fuse: init (API version 7.39) Jan 24 00:58:52.685253 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:58:52.685263 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:58:52.685277 kernel: loop: module loaded Jan 24 00:58:52.685305 systemd-journald[1165]: Collecting audit messages is disabled. Jan 24 00:58:52.685332 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:58:52.685343 systemd-journald[1165]: Journal started Jan 24 00:58:52.685385 systemd-journald[1165]: Runtime Journal (/run/log/journal/1f49d74050cf41c2b1144cb11ededc25) is 6.0M, max 48.4M, 42.3M free. Jan 24 00:58:52.695775 kernel: ACPI: bus type drm_connector registered Jan 24 00:58:52.695814 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:58:52.708297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:58:52.714203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:58:52.718798 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:58:52.722396 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:58:52.725472 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:58:52.728686 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:58:52.731580 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:58:52.734758 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:58:52.738861 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:58:52.741946 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:58:52.747339 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:58:52.751615 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:58:52.751840 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
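
The modprobe@configfs / modprobe@dm_mod / modprobe@drm starts above are instances of one systemd template unit; the text after "@" names the module, escaped with systemd's unit-name rules. A small sketch of splitting such names back into template and instance (the unescaping shown covers the two common rules; the template's actual ExecStart is not quoted here):

    # Sketch: split a systemd template instance name like "modprobe@dm_mod.service"
    # into (template, instance), undoing the most common escaping rules.
    def parse_instance(unit: str) -> tuple[str, str]:
        name, _, suffix = unit.rpartition(".")
        template, _, instance = name.partition("@")
        # systemd escaping: "-" encodes "/", and "\xHH" encodes a literal byte.
        decoded = instance.replace("-", "/")
        while "\\x" in decoded:
            i = decoded.index("\\x")
            decoded = decoded[:i] + chr(int(decoded[i + 2:i + 4], 16)) + decoded[i + 4:]
        return template + "@." + suffix, decoded

    for u in ["modprobe@dm_mod.service", "modprobe@efi_pstore.service",
              "systemd-fsck@dev-disk-by\\x2dlabel-OEM.service"]:
        print(u, "->", parse_instance(u))
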
Jan 24 00:58:52.756031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:58:52.756341 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:58:52.759931 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:58:52.760218 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:58:52.764382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:58:52.764618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:58:52.768616 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:58:52.768829 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:58:52.772429 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:58:52.772670 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:58:52.776424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:58:52.780377 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:58:52.784693 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:58:52.800477 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:58:52.819311 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:58:52.823723 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:58:52.826857 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:58:52.828723 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:58:52.833758 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:58:52.836701 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:58:52.838078 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:58:52.842260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:58:52.846307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:58:52.853287 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:58:52.858664 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:58:52.860408 systemd-journald[1165]: Time spent on flushing to /var/log/journal/1f49d74050cf41c2b1144cb11ededc25 is 13.960ms for 935 entries. Jan 24 00:58:52.860408 systemd-journald[1165]: System Journal (/var/log/journal/1f49d74050cf41c2b1144cb11ededc25) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:58:52.881772 systemd-journald[1165]: Received client request to flush runtime journal. Jan 24 00:58:52.864789 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:58:52.869915 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:58:52.873781 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:58:52.879855 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
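
journald's self-report above (13.960 ms to flush 935 entries, runtime journal at 6.0M of a 48.4M cap) works out as follows:

    # Arithmetic from the systemd-journald messages above.
    flush_ms, entries = 13.960, 935
    print(f"{flush_ms / entries * 1000:.1f} us per entry")   # ~14.9 us
    runtime_used, runtime_max = 6.0, 48.4   # MiB, from "is 6.0M, max 48.4M"
    print(f"runtime journal at {runtime_used / runtime_max:.1%} of its cap")  # ~12.4%
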
Jan 24 00:58:52.889357 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:58:52.895631 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:58:52.900707 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:58:52.907949 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Jan 24 00:58:52.908384 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 24 00:58:52.908712 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Jan 24 00:58:52.915897 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:58:52.927338 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:58:52.957687 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:58:52.966249 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:58:52.983545 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 24 00:58:52.983577 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 24 00:58:52.988833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:58:53.229817 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:58:53.244560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:58:53.270603 systemd-udevd[1241]: Using default interface naming scheme 'v255'. Jan 24 00:58:53.292421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:58:53.306349 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:58:53.314353 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:58:53.346274 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 24 00:58:53.354177 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1243) Jan 24 00:58:53.404846 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:58:53.430375 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 24 00:58:53.441334 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 24 00:58:53.456250 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:58:53.478327 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:58:53.478602 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:58:53.478820 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 24 00:58:53.478835 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:58:53.506173 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:58:53.518778 systemd-networkd[1251]: lo: Link UP Jan 24 00:58:53.518815 systemd-networkd[1251]: lo: Gained carrier Jan 24 00:58:53.520862 systemd-networkd[1251]: Enumeration completed Jan 24 00:58:53.521698 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:58:53.521703 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 24 00:58:53.523063 systemd-networkd[1251]: eth0: Link UP Jan 24 00:58:53.523071 systemd-networkd[1251]: eth0: Gained carrier Jan 24 00:58:53.523083 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:58:53.527358 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:58:53.531365 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:58:53.542101 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:58:53.548242 systemd-networkd[1251]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 24 00:58:53.620105 kernel: kvm_amd: TSC scaling supported Jan 24 00:58:53.620200 kernel: kvm_amd: Nested Virtualization enabled Jan 24 00:58:53.620215 kernel: kvm_amd: Nested Paging enabled Jan 24 00:58:53.623285 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 24 00:58:53.623310 kernel: kvm_amd: PMU virtualization is disabled Jan 24 00:58:53.671188 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:58:53.698451 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:58:53.783323 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:58:53.787582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:58:53.795831 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:58:53.826635 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:58:53.830907 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:58:53.844361 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:58:53.849731 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:58:53.884953 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:58:53.891552 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:58:53.896394 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:58:53.896456 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:58:53.901582 systemd[1]: Reached target machines.target - Containers. Jan 24 00:58:53.907484 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:58:53.925380 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:58:53.930254 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:58:53.933326 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:58:53.934905 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:58:53.940260 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:58:53.943564 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:58:53.948235 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
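
The DHCPv4 lease above places eth0 at 10.0.0.151/16 with gateway 10.0.0.1. A quick consistency check of that addressing with Python's ipaddress module:

    import ipaddress

    # Values from the systemd-networkd DHCPv4 lease above.
    iface = ipaddress.ip_interface("10.0.0.151/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                     # 10.0.0.0/16
    print(gateway in iface.network)          # True: the gateway is on-link
    print(iface.network.num_addresses - 2)   # 65534 usable host addresses
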
Jan 24 00:58:53.959182 kernel: loop0: detected capacity change from 0 to 224512 Jan 24 00:58:53.960726 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:58:53.975507 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:58:53.976625 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:58:53.986203 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:58:54.016181 kernel: loop1: detected capacity change from 0 to 142488 Jan 24 00:58:54.060165 kernel: loop2: detected capacity change from 0 to 140768 Jan 24 00:58:54.107185 kernel: loop3: detected capacity change from 0 to 224512 Jan 24 00:58:54.124183 kernel: loop4: detected capacity change from 0 to 142488 Jan 24 00:58:54.140170 kernel: loop5: detected capacity change from 0 to 140768 Jan 24 00:58:54.154963 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 24 00:58:54.155669 (sd-merge)[1312]: Merged extensions into '/usr'. Jan 24 00:58:54.159635 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:58:54.159669 systemd[1]: Reloading... Jan 24 00:58:54.214270 zram_generator::config[1340]: No configuration found. Jan 24 00:58:54.264485 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:58:54.344851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:58:54.408801 systemd[1]: Reloading finished in 248 ms. Jan 24 00:58:54.426894 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:58:54.430444 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:58:54.451427 systemd[1]: Starting ensure-sysext.service... Jan 24 00:58:54.454730 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:58:54.459764 systemd[1]: Reloading requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:58:54.459800 systemd[1]: Reloading... Jan 24 00:58:54.489086 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:58:54.489488 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:58:54.490498 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:58:54.490752 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 24 00:58:54.490825 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 24 00:58:54.497080 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:58:54.497298 systemd-tmpfiles[1385]: Skipping /boot Jan 24 00:58:54.501734 zram_generator::config[1413]: No configuration found. Jan 24 00:58:54.512060 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. 
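
The paired loop-device capacity changes and the "(sd-merge)" lines above are systemd-sysext attaching the containerd-flatcar, docker-flatcar, and kubernetes extension images and merging them into /usr, after which systemd reloads. Conceptually the merge is an overlayfs mount; a hedged sketch of the equivalent invocation (the lowerdir paths are hypothetical, and the real tool also validates extension-release metadata):

    import shlex

    # Illustrative only: systemd-sysext's merge is conceptually an overlayfs
    # mount over /usr, with each extension's /usr tree as an extra lower layer.
    # The staging paths below are hypothetical; the real tool derives them itself.
    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes"]
    lowers = [f"/run/extensions/{name}/usr" for name in extensions] + ["/usr"]

    cmd = ["mount", "-t", "overlay", "overlay",
           "-o", "lowerdir=" + ":".join(lowers), "/usr"]
    print(shlex.join(cmd))
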
Jan 24 00:58:54.512104 systemd-tmpfiles[1385]: Skipping /boot Jan 24 00:58:54.628745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:58:54.693474 systemd[1]: Reloading finished in 233 ms. Jan 24 00:58:54.723955 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:58:54.737472 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:58:54.742325 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:58:54.747585 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:58:54.758702 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:58:54.768286 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:58:54.777362 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:58:54.777556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:58:54.779337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:58:54.785385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:58:54.796530 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:58:54.799474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:58:54.799574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:58:54.801363 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:58:54.807601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:58:54.807817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:58:54.812550 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:58:54.812795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:58:54.818776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:58:54.819324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:58:54.821285 augenrules[1486]: No rules Jan 24 00:58:54.823497 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:58:54.835910 systemd[1]: Finished ensure-sysext.service. Jan 24 00:58:54.838874 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:58:54.846377 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:58:54.851570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:58:54.851862 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:58:54.860445 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 24 00:58:54.861889 systemd-resolved[1468]: Positive Trust Anchors: Jan 24 00:58:54.861898 systemd-resolved[1468]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:58:54.861925 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:58:54.865561 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:58:54.865706 systemd-resolved[1468]: Defaulting to hostname 'linux'. Jan 24 00:58:54.870233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:58:54.877334 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:58:54.880909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:58:54.883225 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:58:54.889342 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:58:54.892590 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:58:54.892656 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:58:54.893199 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:58:54.897714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:58:54.897950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:58:54.902657 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:58:54.902879 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:58:54.907412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:58:54.907633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:58:54.912098 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:58:54.912703 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:58:54.918176 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:58:54.926774 systemd[1]: Reached target network.target - Network. Jan 24 00:58:54.929293 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:58:54.932382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:58:54.932470 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:58:54.994519 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:58:54.998907 systemd[1]: Reached target sysinit.target - System Initialization. 
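
The positive trust anchor resolved loads above is the DNS root zone's DS record for key tag 20326 (the KSK-2017). Its fields decoded, using the IANA registry meanings of algorithm 8 and digest type 2:

    # Fields of the root trust anchor from the systemd-resolved message above:
    # ". IN DS 20326 8 2 e06d44b8..."
    key_tag, algorithm, digest_type = 20326, 8, 2
    digest = "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

    ALGORITHMS = {8: "RSA/SHA-256"}   # RFC 5702
    DIGEST_TYPES = {2: "SHA-256"}     # RFC 4509

    print(f"key tag {key_tag}: {ALGORITHMS[algorithm]} key, "
          f"{DIGEST_TYPES[digest_type]} digest of {len(digest) // 2} bytes")
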
Jan 24 00:58:55.647513 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 24 00:58:55.647518 systemd-resolved[1468]: Clock change detected. Flushing caches. Jan 24 00:58:55.647576 systemd-timesyncd[1510]: Initial clock synchronization to Sat 2026-01-24 00:58:55.647408 UTC. Jan 24 00:58:55.650014 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:58:55.653413 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:58:55.656618 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:58:55.660149 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:58:55.660177 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:58:55.664052 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:58:55.666947 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:58:55.669938 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:58:55.673094 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:58:55.676472 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:58:55.682008 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:58:55.686432 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:58:55.692071 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:58:55.696136 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:58:55.699371 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:58:55.702286 systemd[1]: System is tainted: cgroupsv1 Jan 24 00:58:55.702362 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:58:55.702396 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:58:55.703918 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:58:55.709214 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:58:55.713614 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:58:55.719067 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:58:55.722978 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:58:55.727020 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:58:55.731004 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:58:55.738330 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:58:55.742290 jq[1526]: false Jan 24 00:58:55.744052 dbus-daemon[1525]: [system] SELinux support is enabled Jan 24 00:58:55.747001 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
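
"Clock change detected. Flushing caches." marks timesyncd stepping the system clock: the journal stamps jump from 00:58:54.998907 to 00:58:55.647513 at that point. The apparent step, from those two stamps:

    from datetime import datetime

    # The last pre-sync stamp and the first post-sync stamp from the log above.
    before = datetime.fromisoformat("2026-01-24 00:58:54.998907")
    after = datetime.fromisoformat("2026-01-24 00:58:55.647513")

    # Real time also elapsed between the two entries, so this is an upper
    # bound on the clock step, not the exact adjustment.
    print(f"apparent clock step: <= {(after - before).total_seconds():.3f} s")  # ~0.649 s
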
Jan 24 00:58:55.754920 extend-filesystems[1528]: Found loop3 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found loop4 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found loop5 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found sr0 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found vda Jan 24 00:58:55.764743 extend-filesystems[1528]: Found vda1 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found vda2 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found vda3 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found usr Jan 24 00:58:55.764743 extend-filesystems[1528]: Found vda4 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found vda6 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found vda7 Jan 24 00:58:55.764743 extend-filesystems[1528]: Found vda9 Jan 24 00:58:55.764743 extend-filesystems[1528]: Checking size of /dev/vda9 Jan 24 00:58:55.823127 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 24 00:58:55.823433 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1248) Jan 24 00:58:55.765042 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:58:55.823941 extend-filesystems[1528]: Resized partition /dev/vda9 Jan 24 00:58:55.770018 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:58:55.826520 extend-filesystems[1550]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:58:55.792294 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:58:55.827950 jq[1554]: true Jan 24 00:58:55.801928 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:58:55.812329 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:58:55.825805 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:58:55.826146 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:58:55.826496 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:58:55.826862 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:58:55.833609 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:58:55.833984 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:58:55.847111 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 24 00:58:55.847158 update_engine[1551]: I20260124 00:58:55.846655 1551 main.cc:92] Flatcar Update Engine starting Jan 24 00:58:55.849944 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:58:55.867085 update_engine[1551]: I20260124 00:58:55.849032 1551 update_check_scheduler.cc:74] Next update check in 2m50s Jan 24 00:58:55.867132 jq[1559]: true Jan 24 00:58:55.873906 extend-filesystems[1550]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 24 00:58:55.873906 extend-filesystems[1550]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 24 00:58:55.873906 extend-filesystems[1550]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 24 00:58:55.873392 systemd[1]: extend-filesystems.service: Deactivated successfully. 
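
extend-filesystems above grows the mounted root ext4 on /dev/vda9 online, from 553472 to 1864699 blocks at 4 KiB each. In bytes:

    # Block counts from the resize2fs output above; "(4k)" gives the block size.
    BLOCK = 4096
    before_blocks, after_blocks = 553472, 1864699

    before = before_blocks * BLOCK
    after = after_blocks * BLOCK
    print(f"before: {before / 2**30:.2f} GiB")            # ~2.11 GiB
    print(f"after:  {after / 2**30:.2f} GiB")             # ~7.11 GiB
    print(f"growth: {(after - before) / 2**30:.2f} GiB")  # ~5.00 GiB
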
Jan 24 00:58:55.903095 tar[1557]: linux-amd64/LICENSE Jan 24 00:58:55.903095 tar[1557]: linux-amd64/helm Jan 24 00:58:55.903372 extend-filesystems[1528]: Resized filesystem in /dev/vda9 Jan 24 00:58:55.873394 systemd-logind[1545]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:58:55.873415 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:58:55.873774 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:58:55.875927 systemd-logind[1545]: New seat seat0. Jan 24 00:58:55.886666 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:58:55.898023 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:58:55.907251 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:58:55.907385 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:58:55.916425 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:58:55.916632 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:58:55.923634 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:58:55.931799 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:58:55.934964 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:58:55.941631 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:58:55.954735 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 00:58:55.970876 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:58:56.046163 containerd[1560]: time="2026-01-24T00:58:56.046087509Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:58:56.062676 systemd-networkd[1251]: eth0: Gained IPv6LL Jan 24 00:58:56.066926 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:58:56.074193 containerd[1560]: time="2026-01-24T00:58:56.069449865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:58:56.070975 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.074667464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.074746842Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.074765236Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.075046471Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.075063864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.075127242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.075139054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.075348685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.075362421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.075373462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:58:56.075462 containerd[1560]: time="2026-01-24T00:58:56.075382147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:58:56.075651 containerd[1560]: time="2026-01-24T00:58:56.075466605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:58:56.075765 containerd[1560]: time="2026-01-24T00:58:56.075745246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:58:56.076048 containerd[1560]: time="2026-01-24T00:58:56.076028444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:58:56.076118 containerd[1560]: time="2026-01-24T00:58:56.076104806Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:58:56.076251 containerd[1560]: time="2026-01-24T00:58:56.076236582Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:58:56.076345 containerd[1560]: time="2026-01-24T00:58:56.076331920Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:58:56.081564 containerd[1560]: time="2026-01-24T00:58:56.081546010Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:58:56.081676 containerd[1560]: time="2026-01-24T00:58:56.081662718Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:58:56.081907 containerd[1560]: time="2026-01-24T00:58:56.081891725Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 24 00:58:56.081995 containerd[1560]: time="2026-01-24T00:58:56.081981112Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:58:56.082104 containerd[1560]: time="2026-01-24T00:58:56.082089083Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:58:56.082132 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.082360880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083197052Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083435867Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083492863Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083506378Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083524944Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083536585Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083548377Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083560450Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083573204Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083583934Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083594453Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083605053Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:58:56.084752 containerd[1560]: time="2026-01-24T00:58:56.083621614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083633226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083644116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083655067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083665596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083676556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083687146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083741348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083752969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083766133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083777675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083787313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083797853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083811599Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083867202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085014 containerd[1560]: time="2026-01-24T00:58:56.083879526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085237 containerd[1560]: time="2026-01-24T00:58:56.083888322Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:58:56.085237 containerd[1560]: time="2026-01-24T00:58:56.083928898Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:58:56.085237 containerd[1560]: time="2026-01-24T00:58:56.083942082Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:58:56.085237 containerd[1560]: time="2026-01-24T00:58:56.083951510Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:58:56.085237 containerd[1560]: time="2026-01-24T00:58:56.083963292Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:58:56.085237 containerd[1560]: time="2026-01-24T00:58:56.083971477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 24 00:58:56.085237 containerd[1560]: time="2026-01-24T00:58:56.083981536Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:58:56.085237 containerd[1560]: time="2026-01-24T00:58:56.083990172Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:58:56.085237 containerd[1560]: time="2026-01-24T00:58:56.083999419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.084207027Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.084253403Z" level=info msg="Connect containerd service" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.084290102Z" level=info msg="using legacy CRI server" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.084296363Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:58:56.085375 containerd[1560]: 
time="2026-01-24T00:58:56.084383035Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.084933392Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.085161418Z" level=info msg="Start subscribing containerd event" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.085197876Z" level=info msg="Start recovering state" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.085247920Z" level=info msg="Start event monitor" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.085261735Z" level=info msg="Start snapshots syncer" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.085269801Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:58:56.085375 containerd[1560]: time="2026-01-24T00:58:56.085276994Z" level=info msg="Start streaming server" Jan 24 00:58:56.085670 containerd[1560]: time="2026-01-24T00:58:56.085620094Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:58:56.085786 containerd[1560]: time="2026-01-24T00:58:56.085769683Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:58:56.090179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:58:56.093400 containerd[1560]: time="2026-01-24T00:58:56.093317735Z" level=info msg="containerd successfully booted in 0.048508s" Jan 24 00:58:56.098075 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:58:56.101238 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:58:56.131273 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 24 00:58:56.131602 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 24 00:58:56.135192 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:58:56.140501 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:58:56.314517 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:58:56.342249 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:58:56.343560 tar[1557]: linux-amd64/README.md Jan 24 00:58:56.368146 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:58:56.371631 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:58:56.397369 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:58:56.397781 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:58:56.413251 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:58:56.424140 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:58:56.439206 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:58:56.443900 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:58:56.448337 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:58:56.888676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:58:56.892983 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:58:56.895346 (kubelet)[1663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:58:56.896994 systemd[1]: Startup finished in 7.078s (kernel) + 4.630s (userspace) = 11.709s. Jan 24 00:58:57.434161 kubelet[1663]: E0124 00:58:57.434030 1663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:58:57.437573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:58:57.438045 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:59:00.538637 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:59:00.549062 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:59782.service - OpenSSH per-connection server daemon (10.0.0.1:59782). Jan 24 00:59:00.591756 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 59782 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:00.593977 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:00.604452 systemd-logind[1545]: New session 1 of user core. Jan 24 00:59:00.605653 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:59:00.614171 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:59:00.626868 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:59:00.641087 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:59:00.644425 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:59:00.744414 systemd[1682]: Queued start job for default target default.target. Jan 24 00:59:00.744908 systemd[1682]: Created slice app.slice - User Application Slice. Jan 24 00:59:00.744953 systemd[1682]: Reached target paths.target - Paths. Jan 24 00:59:00.744967 systemd[1682]: Reached target timers.target - Timers. Jan 24 00:59:00.755970 systemd[1682]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:59:00.763898 systemd[1682]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:59:00.763982 systemd[1682]: Reached target sockets.target - Sockets. Jan 24 00:59:00.763996 systemd[1682]: Reached target basic.target - Basic System. Jan 24 00:59:00.764038 systemd[1682]: Reached target default.target - Main User Target. Jan 24 00:59:00.764074 systemd[1682]: Startup finished in 112ms. Jan 24 00:59:00.764550 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:59:00.766137 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:59:00.822047 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:59786.service - OpenSSH per-connection server daemon (10.0.0.1:59786). Jan 24 00:59:00.856060 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 59786 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:00.857695 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:00.864529 systemd-logind[1545]: New session 2 of user core. 
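The kubelet exit above is the expected first-boot state of an uninitialized node: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so the unit fails until one of those runs. The file it is looking for is a KubeletConfiguration document; a minimal sketch consistent with what this node logs later (cgroupfs driver, containerd endpoint), and an assumption rather than the file's real contents, would be:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: cgroupfs   # matches CgroupDriver in the node config dump further down
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock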
Jan 24 00:59:00.874322 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:59:00.939962 sshd[1694]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:00.999532 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:59786.service: Deactivated successfully. Jan 24 00:59:01.024858 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:59:01.031581 systemd-logind[1545]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:59:01.034228 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:59788.service - OpenSSH per-connection server daemon (10.0.0.1:59788). Jan 24 00:59:01.056197 systemd-logind[1545]: Removed session 2. Jan 24 00:59:01.065936 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 59788 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:01.067632 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:01.072910 systemd-logind[1545]: New session 3 of user core. Jan 24 00:59:01.081279 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:59:01.135254 sshd[1702]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:01.144105 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:59796.service - OpenSSH per-connection server daemon (10.0.0.1:59796). Jan 24 00:59:01.144684 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:59788.service: Deactivated successfully. Jan 24 00:59:01.147278 systemd-logind[1545]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:59:01.147915 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:59:01.149336 systemd-logind[1545]: Removed session 3. Jan 24 00:59:01.175378 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 59796 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:01.177201 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:01.183033 systemd-logind[1545]: New session 4 of user core. Jan 24 00:59:01.193197 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:59:01.250811 sshd[1707]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:01.259088 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:59804.service - OpenSSH per-connection server daemon (10.0.0.1:59804). Jan 24 00:59:01.259526 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:59796.service: Deactivated successfully. Jan 24 00:59:01.262996 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:59:01.264003 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:59:01.265127 systemd-logind[1545]: Removed session 4. Jan 24 00:59:01.292197 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 59804 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:01.293620 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:01.299141 systemd-logind[1545]: New session 5 of user core. Jan 24 00:59:01.313138 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 24 00:59:01.376964 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:59:01.377412 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:59:01.397809 sudo[1722]: pam_unix(sudo:session): session closed for user root Jan 24 00:59:01.400070 sshd[1715]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:01.413202 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:59816.service - OpenSSH per-connection server daemon (10.0.0.1:59816). Jan 24 00:59:01.414196 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:59804.service: Deactivated successfully. Jan 24 00:59:01.416558 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:59:01.417574 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:59:01.419364 systemd-logind[1545]: Removed session 5. Jan 24 00:59:01.445794 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 59816 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:01.447621 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:01.452718 systemd-logind[1545]: New session 6 of user core. Jan 24 00:59:01.462156 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:59:01.519922 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:59:01.520303 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:59:01.525796 sudo[1732]: pam_unix(sudo:session): session closed for user root Jan 24 00:59:01.534880 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:59:01.535237 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:59:01.555087 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:59:01.557968 auditctl[1735]: No rules Jan 24 00:59:01.559163 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:59:01.559485 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:59:01.561600 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:59:01.598642 augenrules[1754]: No rules Jan 24 00:59:01.600484 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:59:01.602523 sudo[1731]: pam_unix(sudo:session): session closed for user root Jan 24 00:59:01.604671 sshd[1725]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:01.614050 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:59822.service - OpenSSH per-connection server daemon (10.0.0.1:59822). Jan 24 00:59:01.614597 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:59816.service: Deactivated successfully. Jan 24 00:59:01.616233 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:59:01.617213 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:59:01.619178 systemd-logind[1545]: Removed session 6. Jan 24 00:59:01.650308 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 59822 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:01.651676 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:01.656404 systemd-logind[1545]: New session 7 of user core. Jan 24 00:59:01.666125 systemd[1]: Started session-7.scope - Session 7 of User core. 
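The sudo and audit-rules sequence above fits together as follows: the two rule files are removed from /etc/audit/rules.d, then restarting audit-rules stops the service (the auditctl run during the stop reports "No rules") and re-runs augenrules over the now-empty rules directory, which likewise finds nothing to load. The same effect from a shell, using exactly the commands the log records:

  sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
  sudo systemctl restart audit-rules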
Jan 24 00:59:01.723403 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:59:01.723991 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:59:02.015074 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:59:02.015322 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:59:02.317884 dockerd[1786]: time="2026-01-24T00:59:02.317640009Z" level=info msg="Starting up" Jan 24 00:59:02.630150 dockerd[1786]: time="2026-01-24T00:59:02.629969073Z" level=info msg="Loading containers: start." Jan 24 00:59:02.774930 kernel: Initializing XFRM netlink socket Jan 24 00:59:02.880000 systemd-networkd[1251]: docker0: Link UP Jan 24 00:59:02.904039 dockerd[1786]: time="2026-01-24T00:59:02.903877960Z" level=info msg="Loading containers: done." Jan 24 00:59:02.921584 dockerd[1786]: time="2026-01-24T00:59:02.921502667Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:59:02.921809 dockerd[1786]: time="2026-01-24T00:59:02.921688884Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:59:02.921949 dockerd[1786]: time="2026-01-24T00:59:02.921896061Z" level=info msg="Daemon has completed initialization" Jan 24 00:59:02.976342 dockerd[1786]: time="2026-01-24T00:59:02.976192202Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:59:02.976436 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:59:03.394692 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2648696645-merged.mount: Deactivated successfully. Jan 24 00:59:03.711954 containerd[1560]: time="2026-01-24T00:59:03.711587232Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:59:04.216390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819843073.mount: Deactivated successfully. 
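Everything from here down to the etcd pull is a series of CRI PullImage calls against containerd for the Kubernetes v1.32 control-plane images. The driver is not visible in the log (presumably whatever /home/core/install.sh does); as a guess at intent rather than a command shown here, the same image set could be pre-pulled with:

  kubeadm config images pull --kubernetes-version v1.32.11

or image by image against the socket named in the cri config dump:

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.32.11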
Jan 24 00:59:05.328359 containerd[1560]: time="2026-01-24T00:59:05.328274933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:05.329135 containerd[1560]: time="2026-01-24T00:59:05.329047816Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 24 00:59:05.330406 containerd[1560]: time="2026-01-24T00:59:05.330336822Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:05.335130 containerd[1560]: time="2026-01-24T00:59:05.335072411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:05.337078 containerd[1560]: time="2026-01-24T00:59:05.337017562Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.625349359s" Jan 24 00:59:05.337143 containerd[1560]: time="2026-01-24T00:59:05.337083956Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:59:05.338105 containerd[1560]: time="2026-01-24T00:59:05.338051642Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:59:06.630494 containerd[1560]: time="2026-01-24T00:59:06.630352387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:06.631613 containerd[1560]: time="2026-01-24T00:59:06.631523975Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 24 00:59:06.633133 containerd[1560]: time="2026-01-24T00:59:06.633063718Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:06.636693 containerd[1560]: time="2026-01-24T00:59:06.636618383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:06.638392 containerd[1560]: time="2026-01-24T00:59:06.638275589Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.300183842s" Jan 24 00:59:06.638392 containerd[1560]: time="2026-01-24T00:59:06.638337615Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 
00:59:06.639147 containerd[1560]: time="2026-01-24T00:59:06.639088742Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:59:07.688074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:59:07.695978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:59:07.848622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:59:07.855068 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:59:07.872997 containerd[1560]: time="2026-01-24T00:59:07.872928249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:07.873923 containerd[1560]: time="2026-01-24T00:59:07.873857906Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 24 00:59:07.875895 containerd[1560]: time="2026-01-24T00:59:07.875373512Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:07.878747 containerd[1560]: time="2026-01-24T00:59:07.878647513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:07.880017 containerd[1560]: time="2026-01-24T00:59:07.879951901Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.240807916s" Jan 24 00:59:07.880017 containerd[1560]: time="2026-01-24T00:59:07.879996925Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:59:07.880900 containerd[1560]: time="2026-01-24T00:59:07.880745572Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:59:07.916766 kubelet[2010]: E0124 00:59:07.916696 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:59:07.922026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:59:07.922349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:59:08.831335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276545673.mount: Deactivated successfully. 
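Note the cadence: the first kubelet failure landed at 00:58:57 and this "restart counter is at 1" attempt at 00:59:07, a roughly 10 s gap. That is consistent with the usual kubeadm drop-in of Restart=always with RestartSec=10, though the unit file itself is not shown in this log; the effective values could be read back with:

  systemctl show kubelet --property=Restart,RestartUSec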
Jan 24 00:59:09.195008 containerd[1560]: time="2026-01-24T00:59:09.194719269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:09.196023 containerd[1560]: time="2026-01-24T00:59:09.195945413Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 24 00:59:09.197537 containerd[1560]: time="2026-01-24T00:59:09.197458324Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:09.199889 containerd[1560]: time="2026-01-24T00:59:09.199764151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:09.200424 containerd[1560]: time="2026-01-24T00:59:09.200362207Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.319588232s" Jan 24 00:59:09.200424 containerd[1560]: time="2026-01-24T00:59:09.200407191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:59:09.201225 containerd[1560]: time="2026-01-24T00:59:09.201120262Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:59:09.659057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650637994.mount: Deactivated successfully. 
Jan 24 00:59:10.497221 containerd[1560]: time="2026-01-24T00:59:10.497130084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:10.498017 containerd[1560]: time="2026-01-24T00:59:10.497886571Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 24 00:59:10.499307 containerd[1560]: time="2026-01-24T00:59:10.499269757Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:10.502555 containerd[1560]: time="2026-01-24T00:59:10.502503884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:10.504208 containerd[1560]: time="2026-01-24T00:59:10.504142493Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.302979992s" Jan 24 00:59:10.504208 containerd[1560]: time="2026-01-24T00:59:10.504189521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:59:10.504908 containerd[1560]: time="2026-01-24T00:59:10.504874348Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:59:10.888044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176015640.mount: Deactivated successfully. 
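One detail in these pulls: pause:3.10 is fetched here, yet the cri config dump earlier shows SandboxImage still set to registry.k8s.io/pause:3.8, the mismatch the kubelet alludes to later when it warns that --pod-infra-container-image "should also be set in the remote runtime". If the two were meant to agree, the containerd side would be set in /etc/containerd/config.toml (path assumed; the file is not shown in this log):

  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.10"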
Jan 24 00:59:10.894225 containerd[1560]: time="2026-01-24T00:59:10.894120890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:10.894921 containerd[1560]: time="2026-01-24T00:59:10.894863510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 24 00:59:10.896199 containerd[1560]: time="2026-01-24T00:59:10.896112328Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:10.898284 containerd[1560]: time="2026-01-24T00:59:10.898238146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:10.899027 containerd[1560]: time="2026-01-24T00:59:10.898980003Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 394.069398ms" Jan 24 00:59:10.899027 containerd[1560]: time="2026-01-24T00:59:10.899019668Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:59:10.899661 containerd[1560]: time="2026-01-24T00:59:10.899601564Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:59:11.344376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3704591947.mount: Deactivated successfully. Jan 24 00:59:12.743665 containerd[1560]: time="2026-01-24T00:59:12.743592175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:12.744749 containerd[1560]: time="2026-01-24T00:59:12.744688732Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 24 00:59:12.746409 containerd[1560]: time="2026-01-24T00:59:12.746326619Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:12.749924 containerd[1560]: time="2026-01-24T00:59:12.749790039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:12.751134 containerd[1560]: time="2026-01-24T00:59:12.751096282Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.851455805s" Jan 24 00:59:12.751186 containerd[1560]: time="2026-01-24T00:59:12.751137048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:59:14.757770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:59:14.772074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:59:14.796898 systemd[1]: Reloading requested from client PID 2169 ('systemctl') (unit session-7.scope)... Jan 24 00:59:14.796930 systemd[1]: Reloading... Jan 24 00:59:14.862961 zram_generator::config[2207]: No configuration found. Jan 24 00:59:14.974261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:59:15.041790 systemd[1]: Reloading finished in 244 ms. Jan 24 00:59:15.098069 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:59:15.098170 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:59:15.098507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:59:15.101085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:59:15.247093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:59:15.251711 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:59:15.294341 kubelet[2269]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:59:15.294341 kubelet[2269]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:59:15.294341 kubelet[2269]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
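The churn above, kubelet stopped, a daemon-reload issued from the SSH session ("Reloading requested from client PID 2169"), the half-started unit killed with SIGTERM, then a clean start, is the usual footprint of reconfiguring and restarting the service by hand, roughly:

  sudo systemctl daemon-reload
  sudo systemctl restart kubelet

The three deprecation warnings that follow also say where the flags belong now; two of them have direct KubeletConfiguration fields (values taken from elsewhere in this log), while the log's own note explains that the sandbox image is obtained from the CRI instead:

  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/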
Jan 24 00:59:15.294700 kubelet[2269]: I0124 00:59:15.294614 2269 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:59:15.617557 kubelet[2269]: I0124 00:59:15.617429 2269 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:59:15.617557 kubelet[2269]: I0124 00:59:15.617468 2269 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:59:15.617740 kubelet[2269]: I0124 00:59:15.617681 2269 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:59:15.640562 kubelet[2269]: E0124 00:59:15.640484 2269 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:59:15.641067 kubelet[2269]: I0124 00:59:15.641032 2269 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:59:15.646753 kubelet[2269]: E0124 00:59:15.646664 2269 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:59:15.646753 kubelet[2269]: I0124 00:59:15.646713 2269 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:59:15.653407 kubelet[2269]: I0124 00:59:15.653316 2269 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:59:15.654190 kubelet[2269]: I0124 00:59:15.654093 2269 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:59:15.654384 kubelet[2269]: I0124 00:59:15.654152 2269 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 24 00:59:15.655028 kubelet[2269]: I0124 00:59:15.654948 2269 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:59:15.655028 kubelet[2269]: I0124 00:59:15.654989 2269 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:59:15.655203 kubelet[2269]: I0124 00:59:15.655148 2269 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:59:15.657792 kubelet[2269]: I0124 00:59:15.657710 2269 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:59:15.657792 kubelet[2269]: I0124 00:59:15.657750 2269 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:59:15.657792 kubelet[2269]: I0124 00:59:15.657777 2269 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:59:15.657792 kubelet[2269]: I0124 00:59:15.657792 2269 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:59:15.662686 kubelet[2269]: W0124 00:59:15.662594 2269 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 00:59:15.662739 kubelet[2269]: E0124 00:59:15.662679 2269 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:59:15.664894 kubelet[2269]: W0124 00:59:15.662782 2269 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 00:59:15.664894 kubelet[2269]: E0124 00:59:15.662922 2269 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:59:15.664894 kubelet[2269]: I0124 00:59:15.664197 2269 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:59:15.664894 kubelet[2269]: I0124 00:59:15.664595 2269 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:59:15.664894 kubelet[2269]: W0124 00:59:15.664648 2269 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:59:15.666796 kubelet[2269]: I0124 00:59:15.666751 2269 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:59:15.666796 kubelet[2269]: I0124 00:59:15.666797 2269 server.go:1287] "Started kubelet" Jan 24 00:59:15.667569 kubelet[2269]: I0124 00:59:15.667425 2269 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:59:15.671104 kubelet[2269]: I0124 00:59:15.671050 2269 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:59:15.676148 kubelet[2269]: I0124 00:59:15.676088 2269 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:59:15.677898 kubelet[2269]: I0124 00:59:15.676572 2269 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:59:15.677898 kubelet[2269]: I0124 00:59:15.676572 2269 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:59:15.677898 kubelet[2269]: I0124 00:59:15.677020 2269 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:59:15.677898 kubelet[2269]: E0124 00:59:15.676946 2269 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:59:15.678064 kubelet[2269]: I0124 00:59:15.678000 2269 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:59:15.678106 kubelet[2269]: I0124 00:59:15.678093 2269 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:59:15.678169 kubelet[2269]: I0124 00:59:15.678148 2269 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:59:15.678748 kubelet[2269]: W0124 00:59:15.678631 2269 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 00:59:15.678805 kubelet[2269]: E0124 00:59:15.678769 2269 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:59:15.678943 kubelet[2269]: E0124 00:59:15.677694 2269 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d84e6da546225 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:59:15.666780709 +0000 UTC m=+0.410503052,LastTimestamp:2026-01-24 00:59:15.666780709 +0000 UTC m=+0.410503052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:59:15.679194 kubelet[2269]: E0124 00:59:15.679123 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:15.679358 kubelet[2269]: E0124 00:59:15.679204 2269 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms" Jan 24 00:59:15.679665 kubelet[2269]: I0124 00:59:15.679597 2269 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:59:15.679778 kubelet[2269]: I0124 00:59:15.679712 2269 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:59:15.681603 kubelet[2269]: I0124 00:59:15.681568 2269 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:59:15.702376 kubelet[2269]: I0124 00:59:15.702148 2269 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:59:15.704562 kubelet[2269]: I0124 00:59:15.704458 2269 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:59:15.704562 kubelet[2269]: I0124 00:59:15.704491 2269 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:59:15.704562 kubelet[2269]: I0124 00:59:15.704546 2269 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:59:15.704562 kubelet[2269]: I0124 00:59:15.704553 2269 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:59:15.704707 kubelet[2269]: E0124 00:59:15.704617 2269 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:59:15.708564 kubelet[2269]: W0124 00:59:15.708112 2269 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 00:59:15.708564 kubelet[2269]: E0124 00:59:15.708230 2269 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:59:15.722148 kubelet[2269]: I0124 00:59:15.722128 2269 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:59:15.722272 kubelet[2269]: I0124 00:59:15.722256 2269 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:59:15.722354 kubelet[2269]: I0124 00:59:15.722342 2269 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:59:15.780209 kubelet[2269]: E0124 00:59:15.780164 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:15.795253 kubelet[2269]: I0124 00:59:15.795191 2269 policy_none.go:49] "None policy: Start" Jan 24 00:59:15.795253 kubelet[2269]: I0124 00:59:15.795233 2269 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:59:15.795253 kubelet[2269]: I0124 00:59:15.795248 2269 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:59:15.804927 kubelet[2269]: I0124 00:59:15.803192 2269 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:59:15.804927 kubelet[2269]: I0124 00:59:15.803461 2269 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:59:15.804927 kubelet[2269]: I0124 00:59:15.803473 2269 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:59:15.804927 kubelet[2269]: I0124 00:59:15.804668 2269 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:59:15.806338 kubelet[2269]: E0124 00:59:15.806315 2269 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:59:15.806476 kubelet[2269]: E0124 00:59:15.806465 2269 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 24 00:59:15.812732 kubelet[2269]: E0124 00:59:15.812671 2269 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:59:15.816454 kubelet[2269]: E0124 00:59:15.816421 2269 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:59:15.817594 kubelet[2269]: E0124 00:59:15.817553 2269 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:59:15.879427 kubelet[2269]: I0124 00:59:15.879126 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/804c9b5e5683790937127f2acb79d6c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"804c9b5e5683790937127f2acb79d6c5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:15.879427 kubelet[2269]: I0124 00:59:15.879193 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/804c9b5e5683790937127f2acb79d6c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"804c9b5e5683790937127f2acb79d6c5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:15.879427 kubelet[2269]: I0124 00:59:15.879213 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/804c9b5e5683790937127f2acb79d6c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"804c9b5e5683790937127f2acb79d6c5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:15.880719 kubelet[2269]: E0124 00:59:15.880556 2269 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms" Jan 24 00:59:15.905401 kubelet[2269]: I0124 00:59:15.905354 2269 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:59:15.906074 kubelet[2269]: E0124 00:59:15.905927 2269 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jan 24 00:59:15.980408 kubelet[2269]: I0124 00:59:15.980339 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:15.980408 kubelet[2269]: I0124 00:59:15.980402 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:15.980408 kubelet[2269]: I0124 00:59:15.980422 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:15.980408 kubelet[2269]: I0124 00:59:15.980438 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:15.980408 kubelet[2269]: I0124 00:59:15.980454 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:15.980756 kubelet[2269]: I0124 00:59:15.980537 2269 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:59:16.108363 kubelet[2269]: I0124 00:59:16.108327 2269 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:59:16.108972 kubelet[2269]: E0124 00:59:16.108798 2269 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jan 24 00:59:16.114316 kubelet[2269]: E0124 00:59:16.114160 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:16.115045 containerd[1560]: time="2026-01-24T00:59:16.114991418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:804c9b5e5683790937127f2acb79d6c5,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:16.117374 kubelet[2269]: E0124 00:59:16.117350 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:16.117860 containerd[1560]: time="2026-01-24T00:59:16.117768727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:16.118371 kubelet[2269]: E0124 00:59:16.118339 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:16.118921 containerd[1560]: time="2026-01-24T00:59:16.118789304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:16.281355 kubelet[2269]: E0124 00:59:16.281185 2269 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms" Jan 24 00:59:16.511392 kubelet[2269]: I0124 00:59:16.511344 2269 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:59:16.511944 kubelet[2269]: E0124 00:59:16.511800 2269 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Jan 24 00:59:16.552006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013232558.mount: Deactivated successfully. Jan 24 00:59:16.559159 containerd[1560]: time="2026-01-24T00:59:16.559044326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:59:16.562228 containerd[1560]: time="2026-01-24T00:59:16.562168023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:59:16.563087 containerd[1560]: time="2026-01-24T00:59:16.563048600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:59:16.564200 containerd[1560]: time="2026-01-24T00:59:16.564117741Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:59:16.565636 containerd[1560]: time="2026-01-24T00:59:16.565542130Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:59:16.566205 containerd[1560]: time="2026-01-24T00:59:16.566102307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:59:16.567172 containerd[1560]: time="2026-01-24T00:59:16.567133675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:59:16.569544 containerd[1560]: time="2026-01-24T00:59:16.569494982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:59:16.573438 containerd[1560]: time="2026-01-24T00:59:16.573400677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 455.531113ms" Jan 24 00:59:16.578586 containerd[1560]: time="2026-01-24T00:59:16.578511645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"311286\" in 459.534611ms" Jan 24 00:59:16.579394 containerd[1560]: time="2026-01-24T00:59:16.579342470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.246266ms" Jan 24 00:59:16.686260 containerd[1560]: time="2026-01-24T00:59:16.686093352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:16.686260 containerd[1560]: time="2026-01-24T00:59:16.686174433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:16.686260 containerd[1560]: time="2026-01-24T00:59:16.686216191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:16.686460 containerd[1560]: time="2026-01-24T00:59:16.686354209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:16.689414 containerd[1560]: time="2026-01-24T00:59:16.688063339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:16.689414 containerd[1560]: time="2026-01-24T00:59:16.688106840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:16.689414 containerd[1560]: time="2026-01-24T00:59:16.688121036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:16.689414 containerd[1560]: time="2026-01-24T00:59:16.688192931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:16.696084 containerd[1560]: time="2026-01-24T00:59:16.695659788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:16.696084 containerd[1560]: time="2026-01-24T00:59:16.695704672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:16.696084 containerd[1560]: time="2026-01-24T00:59:16.695714541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:16.696084 containerd[1560]: time="2026-01-24T00:59:16.695801022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:16.706306 kubelet[2269]: W0124 00:59:16.706242 2269 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 00:59:16.706418 kubelet[2269]: E0124 00:59:16.706316 2269 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:59:16.759215 kubelet[2269]: W0124 00:59:16.759149 2269 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 00:59:16.759327 kubelet[2269]: E0124 00:59:16.759233 2269 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:59:16.765355 containerd[1560]: time="2026-01-24T00:59:16.765260956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0130aee5cf794c6ffb0ff23b91052daf7feaf198d170e56e5347fdaacecc87ce\"" Jan 24 00:59:16.766711 kubelet[2269]: E0124 00:59:16.766677 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:16.768931 containerd[1560]: time="2026-01-24T00:59:16.768909857Z" level=info msg="CreateContainer within sandbox \"0130aee5cf794c6ffb0ff23b91052daf7feaf198d170e56e5347fdaacecc87ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:59:16.772512 containerd[1560]: time="2026-01-24T00:59:16.772395893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"aefa47c117df1b214a2e423de51cd406f7760cf121830af17a71c59f8244e00a\"" Jan 24 00:59:16.772764 containerd[1560]: time="2026-01-24T00:59:16.772717956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:804c9b5e5683790937127f2acb79d6c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f789d944cfb10554370c9a4010a579138d6b4e5cd7a61060e5a05dac3e5526e6\"" Jan 24 00:59:16.773341 kubelet[2269]: E0124 00:59:16.773301 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:16.773786 kubelet[2269]: E0124 00:59:16.773744 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:16.776751 containerd[1560]: time="2026-01-24T00:59:16.775949222Z" level=info msg="CreateContainer 
within sandbox \"f789d944cfb10554370c9a4010a579138d6b4e5cd7a61060e5a05dac3e5526e6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:59:16.776999 containerd[1560]: time="2026-01-24T00:59:16.776970969Z" level=info msg="CreateContainer within sandbox \"aefa47c117df1b214a2e423de51cd406f7760cf121830af17a71c59f8244e00a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:59:16.787147 containerd[1560]: time="2026-01-24T00:59:16.787027825Z" level=info msg="CreateContainer within sandbox \"0130aee5cf794c6ffb0ff23b91052daf7feaf198d170e56e5347fdaacecc87ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"65de3b3ea668d4ee2cc73815bb17ebb866907d7cc07d8116440d619e039f3d4c\"" Jan 24 00:59:16.787937 containerd[1560]: time="2026-01-24T00:59:16.787585807Z" level=info msg="StartContainer for \"65de3b3ea668d4ee2cc73815bb17ebb866907d7cc07d8116440d619e039f3d4c\"" Jan 24 00:59:16.798951 containerd[1560]: time="2026-01-24T00:59:16.798864494Z" level=info msg="CreateContainer within sandbox \"aefa47c117df1b214a2e423de51cd406f7760cf121830af17a71c59f8244e00a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e1b31e3b797f0eafa175ace755cb6706b7cbf7876f668896d4a20d109cc0fb50\"" Jan 24 00:59:16.799314 containerd[1560]: time="2026-01-24T00:59:16.799293400Z" level=info msg="StartContainer for \"e1b31e3b797f0eafa175ace755cb6706b7cbf7876f668896d4a20d109cc0fb50\"" Jan 24 00:59:16.805941 containerd[1560]: time="2026-01-24T00:59:16.803376822Z" level=info msg="CreateContainer within sandbox \"f789d944cfb10554370c9a4010a579138d6b4e5cd7a61060e5a05dac3e5526e6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f0597d50119f1cc221d16236a3cb643609a1d784bc93c0048a6e6210ad63b3ab\"" Jan 24 00:59:16.805941 containerd[1560]: time="2026-01-24T00:59:16.803772611Z" level=info msg="StartContainer for \"f0597d50119f1cc221d16236a3cb643609a1d784bc93c0048a6e6210ad63b3ab\"" Jan 24 00:59:16.867800 kubelet[2269]: E0124 00:59:16.867725 2269 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d84e6da546225 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-24 00:59:15.666780709 +0000 UTC m=+0.410503052,LastTimestamp:2026-01-24 00:59:15.666780709 +0000 UTC m=+0.410503052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 24 00:59:16.886465 containerd[1560]: time="2026-01-24T00:59:16.886333401Z" level=info msg="StartContainer for \"65de3b3ea668d4ee2cc73815bb17ebb866907d7cc07d8116440d619e039f3d4c\" returns successfully" Jan 24 00:59:16.886614 containerd[1560]: time="2026-01-24T00:59:16.886493530Z" level=info msg="StartContainer for \"e1b31e3b797f0eafa175ace755cb6706b7cbf7876f668896d4a20d109cc0fb50\" returns successfully" Jan 24 00:59:16.898452 containerd[1560]: time="2026-01-24T00:59:16.898420502Z" level=info msg="StartContainer for \"f0597d50119f1cc221d16236a3cb643609a1d784bc93c0048a6e6210ad63b3ab\" returns successfully" Jan 24 00:59:16.927981 kubelet[2269]: W0124 
00:59:16.927779 2269 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Jan 24 00:59:16.927981 kubelet[2269]: E0124 00:59:16.927920 2269 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:59:17.315655 kubelet[2269]: I0124 00:59:17.315550 2269 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:59:17.720456 kubelet[2269]: E0124 00:59:17.720348 2269 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:59:17.721712 kubelet[2269]: E0124 00:59:17.720562 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:17.725479 kubelet[2269]: E0124 00:59:17.725363 2269 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:59:17.725728 kubelet[2269]: E0124 00:59:17.725538 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:17.728706 kubelet[2269]: E0124 00:59:17.728622 2269 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:59:17.728758 kubelet[2269]: E0124 00:59:17.728726 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:18.035940 kubelet[2269]: E0124 00:59:18.034785 2269 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 24 00:59:18.130128 kubelet[2269]: I0124 00:59:18.130041 2269 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:59:18.130128 kubelet[2269]: E0124 00:59:18.130101 2269 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 24 00:59:18.147924 kubelet[2269]: E0124 00:59:18.146061 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:18.247139 kubelet[2269]: E0124 00:59:18.247003 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:18.348254 kubelet[2269]: E0124 00:59:18.347966 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:18.449218 kubelet[2269]: E0124 00:59:18.449141 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:18.549603 kubelet[2269]: E0124 00:59:18.549517 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 
24 00:59:18.650310 kubelet[2269]: E0124 00:59:18.650129 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:18.729880 kubelet[2269]: E0124 00:59:18.729657 2269 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:59:18.729880 kubelet[2269]: E0124 00:59:18.729717 2269 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:59:18.729880 kubelet[2269]: E0124 00:59:18.729778 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:18.729880 kubelet[2269]: E0124 00:59:18.729870 2269 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 24 00:59:18.730347 kubelet[2269]: E0124 00:59:18.729939 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:18.730347 kubelet[2269]: E0124 00:59:18.729984 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:18.751190 kubelet[2269]: E0124 00:59:18.751135 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:18.851811 kubelet[2269]: E0124 00:59:18.851706 2269 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:18.979457 kubelet[2269]: I0124 00:59:18.979371 2269 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:18.985556 kubelet[2269]: E0124 00:59:18.985483 2269 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:18.985556 kubelet[2269]: I0124 00:59:18.985531 2269 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:18.987391 kubelet[2269]: E0124 00:59:18.987354 2269 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:18.987462 kubelet[2269]: I0124 00:59:18.987398 2269 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:59:18.988764 kubelet[2269]: E0124 00:59:18.988720 2269 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 24 00:59:19.662727 kubelet[2269]: I0124 00:59:19.662634 2269 apiserver.go:52] "Watching apiserver" Jan 24 00:59:19.679301 kubelet[2269]: I0124 00:59:19.679228 2269 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:59:19.730299 kubelet[2269]: I0124 00:59:19.730192 2269 kubelet.go:3194] "Creating a 
mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:19.730787 kubelet[2269]: I0124 00:59:19.730553 2269 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:59:19.737452 kubelet[2269]: E0124 00:59:19.737413 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:19.739430 kubelet[2269]: E0124 00:59:19.739354 2269 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:20.227339 systemd[1]: Reloading requested from client PID 2548 ('systemctl') (unit session-7.scope)... Jan 24 00:59:20.227374 systemd[1]: Reloading... Jan 24 00:59:20.300961 zram_generator::config[2588]: No configuration found. Jan 24 00:59:20.412657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:59:20.484976 systemd[1]: Reloading finished in 257 ms. Jan 24 00:59:20.518523 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:59:20.538260 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:59:20.538668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:59:20.547226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:59:20.710527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:59:20.716772 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:59:20.789355 kubelet[2642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:59:20.789355 kubelet[2642]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:59:20.789355 kubelet[2642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:59:20.789735 kubelet[2642]: I0124 00:59:20.789546 2642 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:59:20.798291 kubelet[2642]: I0124 00:59:20.798212 2642 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:59:20.798291 kubelet[2642]: I0124 00:59:20.798250 2642 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:59:20.798486 kubelet[2642]: I0124 00:59:20.798456 2642 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:59:20.799710 kubelet[2642]: I0124 00:59:20.799636 2642 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
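[annotation] The burst of "connection refused" errors above is the usual chicken-and-egg phase of a kubeadm-style control-plane bootstrap: the kubelet starts first, its informers poll https://10.0.0.151:6443 before the kube-apiserver static pod exists, and the errors stop once the apiserver container (f0597d50...) is running. A minimal sketch for watching that transition from the node itself, assuming containerd's default socket path and that curl and crictl are installed (neither is shown in this log):

  # probe the apiserver health endpoint; "connection refused" until the static pod is up, then "ok"
  curl -sk https://10.0.0.151:6443/healthz
  # confirm the kube-apiserver container is running under containerd
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps --name kube-apiserver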
Jan 24 00:59:20.801613 kubelet[2642]: I0124 00:59:20.801548 2642 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:59:20.807059 kubelet[2642]: E0124 00:59:20.806996 2642 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:59:20.807059 kubelet[2642]: I0124 00:59:20.807023 2642 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:59:20.813138 kubelet[2642]: I0124 00:59:20.813104 2642 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 00:59:20.813758 kubelet[2642]: I0124 00:59:20.813681 2642 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:59:20.813997 kubelet[2642]: I0124 00:59:20.813726 2642 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 24 00:59:20.813997 kubelet[2642]: I0124 00:59:20.813980 2642 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:59:20.813997 kubelet[2642]: I0124 00:59:20.813990 2642 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:59:20.814314 kubelet[2642]: I0124 00:59:20.814035 2642 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:59:20.814314 kubelet[2642]: I0124 00:59:20.814179 2642 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:59:20.814314 kubelet[2642]: I0124 00:59:20.814198 2642 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:59:20.814314 kubelet[2642]: I0124 00:59:20.814212 2642 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:59:20.814314 kubelet[2642]: I0124 00:59:20.814221 2642 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jan 24 00:59:20.815274 kubelet[2642]: I0124 00:59:20.815023 2642 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:59:20.815395 kubelet[2642]: I0124 00:59:20.815349 2642 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:59:20.815789 kubelet[2642]: I0124 00:59:20.815714 2642 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:59:20.815789 kubelet[2642]: I0124 00:59:20.815778 2642 server.go:1287] "Started kubelet" Jan 24 00:59:20.817991 kubelet[2642]: I0124 00:59:20.817803 2642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:59:20.826116 kubelet[2642]: I0124 00:59:20.823657 2642 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:59:20.826116 kubelet[2642]: I0124 00:59:20.825285 2642 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:59:20.827375 kubelet[2642]: I0124 00:59:20.826951 2642 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:59:20.827375 kubelet[2642]: E0124 00:59:20.827250 2642 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 24 00:59:20.829411 kubelet[2642]: I0124 00:59:20.829327 2642 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:59:20.829634 kubelet[2642]: I0124 00:59:20.829591 2642 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:59:20.829660 kubelet[2642]: I0124 00:59:20.829652 2642 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:59:20.829987 kubelet[2642]: I0124 00:59:20.829955 2642 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:59:20.830462 kubelet[2642]: I0124 00:59:20.830394 2642 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:59:20.835453 kubelet[2642]: I0124 00:59:20.835393 2642 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:59:20.835527 kubelet[2642]: I0124 00:59:20.835499 2642 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:59:20.839707 kubelet[2642]: I0124 00:59:20.839617 2642 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:59:20.840324 kubelet[2642]: E0124 00:59:20.840263 2642 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:59:20.846474 kubelet[2642]: I0124 00:59:20.846409 2642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:59:20.848304 kubelet[2642]: I0124 00:59:20.848239 2642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:59:20.848304 kubelet[2642]: I0124 00:59:20.848295 2642 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:59:20.848360 kubelet[2642]: I0124 00:59:20.848314 2642 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
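[annotation] The restarted kubelet (PID 2642) logs the same deprecation warnings as its predecessor: --container-runtime-endpoint and --volume-plugin-dir should move into the KubeletConfiguration file, and --pod-infra-container-image disappears in 1.35. A sketch of the equivalent config stanza, assuming kubelet v1.32's kubelet.config.k8s.io/v1beta1 API; the file path is a hypothetical choice, not something this log confirms:

  cat <<'EOF' > /etc/kubernetes/kubelet.yaml   # assumed path; point the kubelet at it via --config
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # replaces --container-runtime-endpoint (containerd socket assumed from this log)
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  # replaces --volume-plugin-dir (path taken from the Flexvolume warning earlier in the log)
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  EOF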
Jan 24 00:59:20.848360 kubelet[2642]: I0124 00:59:20.848322 2642 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:59:20.848415 kubelet[2642]: E0124 00:59:20.848369 2642 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:59:20.896468 kubelet[2642]: I0124 00:59:20.896406 2642 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:59:20.896468 kubelet[2642]: I0124 00:59:20.896445 2642 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:59:20.896468 kubelet[2642]: I0124 00:59:20.896464 2642 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:59:20.896655 kubelet[2642]: I0124 00:59:20.896600 2642 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:59:20.896655 kubelet[2642]: I0124 00:59:20.896638 2642 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:59:20.896696 kubelet[2642]: I0124 00:59:20.896658 2642 policy_none.go:49] "None policy: Start" Jan 24 00:59:20.896696 kubelet[2642]: I0124 00:59:20.896669 2642 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:59:20.896696 kubelet[2642]: I0124 00:59:20.896680 2642 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:59:20.896805 kubelet[2642]: I0124 00:59:20.896776 2642 state_mem.go:75] "Updated machine memory state" Jan 24 00:59:20.898390 kubelet[2642]: I0124 00:59:20.898323 2642 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:59:20.898560 kubelet[2642]: I0124 00:59:20.898498 2642 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:59:20.898560 kubelet[2642]: I0124 00:59:20.898535 2642 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:59:20.900363 kubelet[2642]: I0124 00:59:20.899640 2642 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:59:20.901050 kubelet[2642]: E0124 00:59:20.901004 2642 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:59:20.949522 kubelet[2642]: I0124 00:59:20.949377 2642 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:20.949522 kubelet[2642]: I0124 00:59:20.949495 2642 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:59:20.949712 kubelet[2642]: I0124 00:59:20.949459 2642 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:20.958807 kubelet[2642]: E0124 00:59:20.958753 2642 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:20.958906 kubelet[2642]: E0124 00:59:20.958885 2642 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:59:21.006365 kubelet[2642]: I0124 00:59:21.006227 2642 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 24 00:59:21.015589 kubelet[2642]: I0124 00:59:21.015492 2642 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 24 00:59:21.015589 kubelet[2642]: I0124 00:59:21.015569 2642 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 24 00:59:21.030552 kubelet[2642]: I0124 00:59:21.030472 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:21.030552 kubelet[2642]: I0124 00:59:21.030512 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 24 00:59:21.030552 kubelet[2642]: I0124 00:59:21.030531 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/804c9b5e5683790937127f2acb79d6c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"804c9b5e5683790937127f2acb79d6c5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:21.030552 kubelet[2642]: I0124 00:59:21.030545 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/804c9b5e5683790937127f2acb79d6c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"804c9b5e5683790937127f2acb79d6c5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:21.030552 kubelet[2642]: I0124 00:59:21.030559 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/804c9b5e5683790937127f2acb79d6c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"804c9b5e5683790937127f2acb79d6c5\") " pod="kube-system/kube-apiserver-localhost" Jan 24 00:59:21.030761 kubelet[2642]: I0124 00:59:21.030573 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:21.030761 kubelet[2642]: I0124 00:59:21.030587 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:21.030761 kubelet[2642]: I0124 00:59:21.030599 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:21.030761 kubelet[2642]: I0124 00:59:21.030613 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 24 00:59:21.227816 sudo[2676]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 24 00:59:21.228324 sudo[2676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 24 00:59:21.256773 kubelet[2642]: E0124 00:59:21.256552 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:21.260132 kubelet[2642]: E0124 00:59:21.260024 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:21.260392 kubelet[2642]: E0124 00:59:21.260322 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:21.815162 kubelet[2642]: I0124 00:59:21.815086 2642 apiserver.go:52] "Watching apiserver" Jan 24 00:59:21.830633 kubelet[2642]: I0124 00:59:21.830482 2642 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:59:21.857344 kubelet[2642]: I0124 00:59:21.857160 2642 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 24 00:59:21.880099 kubelet[2642]: E0124 00:59:21.858723 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:21.880099 kubelet[2642]: E0124 00:59:21.858975 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:21.881216 sudo[2676]: pam_unix(sudo:session): session closed for user root Jan 24 00:59:21.897303 kubelet[2642]: E0124 00:59:21.896201 2642 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already 
exists" pod="kube-system/kube-scheduler-localhost" Jan 24 00:59:21.897303 kubelet[2642]: E0124 00:59:21.896419 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:21.938055 kubelet[2642]: I0124 00:59:21.937987 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.937968779 podStartE2EDuration="1.937968779s" podCreationTimestamp="2026-01-24 00:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:59:21.92343962 +0000 UTC m=+1.201136969" watchObservedRunningTime="2026-01-24 00:59:21.937968779 +0000 UTC m=+1.215666128" Jan 24 00:59:21.951989 kubelet[2642]: I0124 00:59:21.950307 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.95028767 podStartE2EDuration="2.95028767s" podCreationTimestamp="2026-01-24 00:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:59:21.938778601 +0000 UTC m=+1.216475950" watchObservedRunningTime="2026-01-24 00:59:21.95028767 +0000 UTC m=+1.227985020" Jan 24 00:59:22.858588 kubelet[2642]: E0124 00:59:22.858545 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:22.859433 kubelet[2642]: E0124 00:59:22.858875 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:23.500746 sudo[1768]: pam_unix(sudo:session): session closed for user root Jan 24 00:59:23.503185 sshd[1760]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:23.508561 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:59822.service: Deactivated successfully. Jan 24 00:59:23.512650 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:59:23.512756 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:59:23.515024 systemd-logind[1545]: Removed session 7. Jan 24 00:59:26.100524 kubelet[2642]: I0124 00:59:26.100420 2642 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:59:26.101078 kubelet[2642]: I0124 00:59:26.100938 2642 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:59:26.101111 containerd[1560]: time="2026-01-24T00:59:26.100771817Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 24 00:59:26.703894 kubelet[2642]: I0124 00:59:26.701685 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.70166327 podStartE2EDuration="7.70166327s" podCreationTimestamp="2026-01-24 00:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:59:21.961607941 +0000 UTC m=+1.239305309" watchObservedRunningTime="2026-01-24 00:59:26.70166327 +0000 UTC m=+5.979360619" Jan 24 00:59:26.768001 kubelet[2642]: I0124 00:59:26.767814 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5145ef6c-cda6-48ed-a138-0a9891f1238e-lib-modules\") pod \"kube-proxy-6xpjr\" (UID: \"5145ef6c-cda6-48ed-a138-0a9891f1238e\") " pod="kube-system/kube-proxy-6xpjr" Jan 24 00:59:26.768147 kubelet[2642]: I0124 00:59:26.768040 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-run\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768147 kubelet[2642]: I0124 00:59:26.768075 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-lib-modules\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768147 kubelet[2642]: I0124 00:59:26.768106 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb00d75b-3b1a-41ee-ba56-618dbb490221-clustermesh-secrets\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768147 kubelet[2642]: I0124 00:59:26.768131 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-etc-cni-netd\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768297 kubelet[2642]: I0124 00:59:26.768156 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-config-path\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768297 kubelet[2642]: I0124 00:59:26.768180 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-host-proc-sys-net\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768297 kubelet[2642]: I0124 00:59:26.768206 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-host-proc-sys-kernel\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " 
pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768297 kubelet[2642]: I0124 00:59:26.768230 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5145ef6c-cda6-48ed-a138-0a9891f1238e-kube-proxy\") pod \"kube-proxy-6xpjr\" (UID: \"5145ef6c-cda6-48ed-a138-0a9891f1238e\") " pod="kube-system/kube-proxy-6xpjr" Jan 24 00:59:26.768297 kubelet[2642]: I0124 00:59:26.768253 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-hostproc\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768297 kubelet[2642]: I0124 00:59:26.768274 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cni-path\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768420 kubelet[2642]: I0124 00:59:26.768300 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-cgroup\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768420 kubelet[2642]: I0124 00:59:26.768334 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v55p\" (UniqueName: \"kubernetes.io/projected/eb00d75b-3b1a-41ee-ba56-618dbb490221-kube-api-access-4v55p\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768420 kubelet[2642]: I0124 00:59:26.768362 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5145ef6c-cda6-48ed-a138-0a9891f1238e-xtables-lock\") pod \"kube-proxy-6xpjr\" (UID: \"5145ef6c-cda6-48ed-a138-0a9891f1238e\") " pod="kube-system/kube-proxy-6xpjr" Jan 24 00:59:26.768420 kubelet[2642]: I0124 00:59:26.768388 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfclh\" (UniqueName: \"kubernetes.io/projected/5145ef6c-cda6-48ed-a138-0a9891f1238e-kube-api-access-zfclh\") pod \"kube-proxy-6xpjr\" (UID: \"5145ef6c-cda6-48ed-a138-0a9891f1238e\") " pod="kube-system/kube-proxy-6xpjr" Jan 24 00:59:26.768527 kubelet[2642]: I0124 00:59:26.768421 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-xtables-lock\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768527 kubelet[2642]: I0124 00:59:26.768444 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb00d75b-3b1a-41ee-ba56-618dbb490221-hubble-tls\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:26.768527 kubelet[2642]: I0124 00:59:26.768479 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-bpf-maps\") pod \"cilium-kjbdr\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " pod="kube-system/cilium-kjbdr" Jan 24 00:59:27.010589 kubelet[2642]: E0124 00:59:27.010440 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:27.011596 containerd[1560]: time="2026-01-24T00:59:27.011360348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xpjr,Uid:5145ef6c-cda6-48ed-a138-0a9891f1238e,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:27.021746 kubelet[2642]: E0124 00:59:27.021598 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:27.022876 containerd[1560]: time="2026-01-24T00:59:27.022061688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjbdr,Uid:eb00d75b-3b1a-41ee-ba56-618dbb490221,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:27.053243 containerd[1560]: time="2026-01-24T00:59:27.052088157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:27.053243 containerd[1560]: time="2026-01-24T00:59:27.052138911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:27.053243 containerd[1560]: time="2026-01-24T00:59:27.052152226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:27.053243 containerd[1560]: time="2026-01-24T00:59:27.052263274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:27.063632 containerd[1560]: time="2026-01-24T00:59:27.063443463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:27.063725 containerd[1560]: time="2026-01-24T00:59:27.063683822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:27.063807 containerd[1560]: time="2026-01-24T00:59:27.063769131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:27.064360 containerd[1560]: time="2026-01-24T00:59:27.064197320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:27.112755 containerd[1560]: time="2026-01-24T00:59:27.112075246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6xpjr,Uid:5145ef6c-cda6-48ed-a138-0a9891f1238e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bac5bd4973b2083fd715a14cac39c83932e02874edb9b81fcece961285947f6b\"" Jan 24 00:59:27.113315 kubelet[2642]: E0124 00:59:27.112705 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:27.116933 containerd[1560]: time="2026-01-24T00:59:27.116899912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjbdr,Uid:eb00d75b-3b1a-41ee-ba56-618dbb490221,Namespace:kube-system,Attempt:0,} returns sandbox id \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\"" Jan 24 00:59:27.119205 containerd[1560]: time="2026-01-24T00:59:27.118756094Z" level=info msg="CreateContainer within sandbox \"bac5bd4973b2083fd715a14cac39c83932e02874edb9b81fcece961285947f6b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:59:27.119266 kubelet[2642]: E0124 00:59:27.119049 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:27.119947 containerd[1560]: time="2026-01-24T00:59:27.119920257Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 24 00:59:27.139952 containerd[1560]: time="2026-01-24T00:59:27.139792912Z" level=info msg="CreateContainer within sandbox \"bac5bd4973b2083fd715a14cac39c83932e02874edb9b81fcece961285947f6b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"727e038c4414197cae52c907e56d528b7c0438c511c23dd91a9e261a88d482ef\"" Jan 24 00:59:27.141454 containerd[1560]: time="2026-01-24T00:59:27.140663988Z" level=info msg="StartContainer for \"727e038c4414197cae52c907e56d528b7c0438c511c23dd91a9e261a88d482ef\"" Jan 24 00:59:27.258624 containerd[1560]: time="2026-01-24T00:59:27.258537851Z" level=info msg="StartContainer for \"727e038c4414197cae52c907e56d528b7c0438c511c23dd91a9e261a88d482ef\" returns successfully" Jan 24 00:59:27.273080 kubelet[2642]: I0124 00:59:27.272874 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4ph\" (UniqueName: \"kubernetes.io/projected/64d263b1-8906-4277-a226-f7fc35939162-kube-api-access-tf4ph\") pod \"cilium-operator-6c4d7847fc-h9d5g\" (UID: \"64d263b1-8906-4277-a226-f7fc35939162\") " pod="kube-system/cilium-operator-6c4d7847fc-h9d5g" Jan 24 00:59:27.273080 kubelet[2642]: I0124 00:59:27.272933 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64d263b1-8906-4277-a226-f7fc35939162-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h9d5g\" (UID: \"64d263b1-8906-4277-a226-f7fc35939162\") " pod="kube-system/cilium-operator-6c4d7847fc-h9d5g" Jan 24 00:59:27.527331 kubelet[2642]: E0124 00:59:27.527136 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:27.527801 containerd[1560]: time="2026-01-24T00:59:27.527750480Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h9d5g,Uid:64d263b1-8906-4277-a226-f7fc35939162,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:27.558669 containerd[1560]: time="2026-01-24T00:59:27.558357213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:27.558669 containerd[1560]: time="2026-01-24T00:59:27.558452200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:27.558669 containerd[1560]: time="2026-01-24T00:59:27.558470544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:27.560868 containerd[1560]: time="2026-01-24T00:59:27.559310182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:27.629929 containerd[1560]: time="2026-01-24T00:59:27.629870877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h9d5g,Uid:64d263b1-8906-4277-a226-f7fc35939162,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a\"" Jan 24 00:59:27.631685 kubelet[2642]: E0124 00:59:27.631655 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:27.653553 kubelet[2642]: E0124 00:59:27.653500 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:27.869491 kubelet[2642]: E0124 00:59:27.868876 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:27.872243 kubelet[2642]: E0124 00:59:27.872057 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:27.891701 kubelet[2642]: I0124 00:59:27.890349 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6xpjr" podStartSLOduration=1.890334158 podStartE2EDuration="1.890334158s" podCreationTimestamp="2026-01-24 00:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:59:27.890098278 +0000 UTC m=+7.167795627" watchObservedRunningTime="2026-01-24 00:59:27.890334158 +0000 UTC m=+7.168031507" Jan 24 00:59:28.103253 kubelet[2642]: E0124 00:59:28.103158 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:28.873221 kubelet[2642]: E0124 00:59:28.873152 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:29.874964 kubelet[2642]: E0124 00:59:29.874889 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 
00:59:31.006285 kubelet[2642]: E0124 00:59:31.005053 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:31.879397 kubelet[2642]: E0124 00:59:31.879358 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:31.932736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866519128.mount: Deactivated successfully. Jan 24 00:59:32.880268 kubelet[2642]: E0124 00:59:32.880226 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:33.508435 containerd[1560]: time="2026-01-24T00:59:33.508362963Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:33.509971 containerd[1560]: time="2026-01-24T00:59:33.509935892Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 24 00:59:33.511442 containerd[1560]: time="2026-01-24T00:59:33.511392392Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:33.513780 containerd[1560]: time="2026-01-24T00:59:33.513732272Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.393576917s" Jan 24 00:59:33.513780 containerd[1560]: time="2026-01-24T00:59:33.513761415Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 24 00:59:33.520939 containerd[1560]: time="2026-01-24T00:59:33.520875256Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 24 00:59:33.530728 containerd[1560]: time="2026-01-24T00:59:33.530611853Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 00:59:33.560881 containerd[1560]: time="2026-01-24T00:59:33.560733428Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\"" Jan 24 00:59:33.561465 containerd[1560]: time="2026-01-24T00:59:33.561419134Z" level=info msg="StartContainer for \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\"" Jan 24 00:59:33.721315 containerd[1560]: time="2026-01-24T00:59:33.721261213Z" level=info msg="StartContainer for 
\"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\" returns successfully" Jan 24 00:59:33.764418 containerd[1560]: time="2026-01-24T00:59:33.762876331Z" level=info msg="shim disconnected" id=4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f namespace=k8s.io Jan 24 00:59:33.764418 containerd[1560]: time="2026-01-24T00:59:33.764333051Z" level=warning msg="cleaning up after shim disconnected" id=4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f namespace=k8s.io Jan 24 00:59:33.764418 containerd[1560]: time="2026-01-24T00:59:33.764345004Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:33.886679 kubelet[2642]: E0124 00:59:33.886585 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:33.892370 containerd[1560]: time="2026-01-24T00:59:33.892221193Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 00:59:33.907758 containerd[1560]: time="2026-01-24T00:59:33.907643438Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\"" Jan 24 00:59:33.908507 containerd[1560]: time="2026-01-24T00:59:33.908407938Z" level=info msg="StartContainer for \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\"" Jan 24 00:59:33.976911 containerd[1560]: time="2026-01-24T00:59:33.976247943Z" level=info msg="StartContainer for \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\" returns successfully" Jan 24 00:59:33.990020 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:59:33.990384 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:59:33.990456 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:59:33.997258 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:59:34.016810 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:59:34.022252 containerd[1560]: time="2026-01-24T00:59:34.022190996Z" level=info msg="shim disconnected" id=bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6 namespace=k8s.io Jan 24 00:59:34.022363 containerd[1560]: time="2026-01-24T00:59:34.022253913Z" level=warning msg="cleaning up after shim disconnected" id=bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6 namespace=k8s.io Jan 24 00:59:34.022363 containerd[1560]: time="2026-01-24T00:59:34.022264272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:34.541700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f-rootfs.mount: Deactivated successfully. 
Jan 24 00:59:34.890536 kubelet[2642]: E0124 00:59:34.890125 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:34.893501 containerd[1560]: time="2026-01-24T00:59:34.893433712Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 00:59:34.912533 containerd[1560]: time="2026-01-24T00:59:34.912467385Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\"" Jan 24 00:59:34.913346 containerd[1560]: time="2026-01-24T00:59:34.913232158Z" level=info msg="StartContainer for \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\"" Jan 24 00:59:34.980658 containerd[1560]: time="2026-01-24T00:59:34.980588449Z" level=info msg="StartContainer for \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\" returns successfully" Jan 24 00:59:35.024268 containerd[1560]: time="2026-01-24T00:59:35.024132795Z" level=info msg="shim disconnected" id=0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67 namespace=k8s.io Jan 24 00:59:35.024268 containerd[1560]: time="2026-01-24T00:59:35.024220937Z" level=warning msg="cleaning up after shim disconnected" id=0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67 namespace=k8s.io Jan 24 00:59:35.024268 containerd[1560]: time="2026-01-24T00:59:35.024235715Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:35.541466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67-rootfs.mount: Deactivated successfully. 
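mount-bpf-fs is the init step that ensures a BPF filesystem is mounted (conventionally at /sys/fs/bpf) so the agent can pin its eBPF maps across restarts. A Linux-only sketch of the check, assuming the standard /proc/mounts layout:

def bpf_fs_mounted(mounts_path: str = "/proc/mounts") -> bool:
    # /proc/mounts fields: device mountpoint fstype options dump pass
    with open(mounts_path) as f:
        return any(
            len(fields) >= 3 and fields[2] == "bpf"
            for fields in (line.split() for line in f)
        )

if __name__ == "__main__":
    print("bpf filesystem mounted:", bpf_fs_mounted())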
Jan 24 00:59:35.774077 containerd[1560]: time="2026-01-24T00:59:35.774012321Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:35.775110 containerd[1560]: time="2026-01-24T00:59:35.775052916Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 24 00:59:35.776302 containerd[1560]: time="2026-01-24T00:59:35.776232457Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:59:35.778482 containerd[1560]: time="2026-01-24T00:59:35.778411485Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.257464517s" Jan 24 00:59:35.778557 containerd[1560]: time="2026-01-24T00:59:35.778477817Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 24 00:59:35.782937 containerd[1560]: time="2026-01-24T00:59:35.782795139Z" level=info msg="CreateContainer within sandbox \"3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 24 00:59:35.796133 containerd[1560]: time="2026-01-24T00:59:35.795952470Z" level=info msg="CreateContainer within sandbox \"3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\"" Jan 24 00:59:35.796780 containerd[1560]: time="2026-01-24T00:59:35.796661156Z" level=info msg="StartContainer for \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\"" Jan 24 00:59:35.859360 containerd[1560]: time="2026-01-24T00:59:35.859324781Z" level=info msg="StartContainer for \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\" returns successfully" Jan 24 00:59:35.893748 kubelet[2642]: E0124 00:59:35.893395 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:35.899647 kubelet[2642]: E0124 00:59:35.899591 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:35.901732 containerd[1560]: time="2026-01-24T00:59:35.901663528Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 00:59:35.924303 containerd[1560]: time="2026-01-24T00:59:35.924128631Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\"" Jan 24 00:59:35.925921 containerd[1560]: time="2026-01-24T00:59:35.925805442Z" level=info msg="StartContainer for \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\"" Jan 24 00:59:35.943762 kubelet[2642]: I0124 00:59:35.943137 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h9d5g" podStartSLOduration=0.795776706 podStartE2EDuration="8.943115365s" podCreationTimestamp="2026-01-24 00:59:27 +0000 UTC" firstStartedPulling="2026-01-24 00:59:27.632234869 +0000 UTC m=+6.909932219" lastFinishedPulling="2026-01-24 00:59:35.779573509 +0000 UTC m=+15.057270878" observedRunningTime="2026-01-24 00:59:35.912765014 +0000 UTC m=+15.190462394" watchObservedRunningTime="2026-01-24 00:59:35.943115365 +0000 UTC m=+15.220812715" Jan 24 00:59:36.096118 containerd[1560]: time="2026-01-24T00:59:36.096002335Z" level=info msg="StartContainer for \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\" returns successfully" Jan 24 00:59:36.122680 containerd[1560]: time="2026-01-24T00:59:36.122573440Z" level=info msg="shim disconnected" id=77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344 namespace=k8s.io Jan 24 00:59:36.122680 containerd[1560]: time="2026-01-24T00:59:36.122637739Z" level=warning msg="cleaning up after shim disconnected" id=77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344 namespace=k8s.io Jan 24 00:59:36.122680 containerd[1560]: time="2026-01-24T00:59:36.122646986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:59:36.902918 kubelet[2642]: E0124 00:59:36.902123 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:36.902918 kubelet[2642]: E0124 00:59:36.902269 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:36.906436 containerd[1560]: time="2026-01-24T00:59:36.905422172Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 24 00:59:36.953144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574293338.mount: Deactivated successfully. 
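The two figures in each pod_startup_latency_tracker entry are related by image-pull time: podStartSLOduration excludes the pull window, so SLO = E2E - (lastFinishedPulling - firstStartedPulling), computed from the monotonic m=+ offsets. Entries whose pull timestamps are the zero time (0001-01-01, nothing pulled) report equal SLO and E2E values, as with kube-proxy earlier. Checking the arithmetic for the cilium-operator entry above:

# cilium-operator-6c4d7847fc-h9d5g, values copied from the entry above
e2e = 8.943115365                    # podStartE2EDuration, seconds
pull = 15.057270878 - 6.909932219    # lastFinishedPulling - firstStartedPulling (m=+ offsets)
print(round(e2e - pull, 9))          # 0.795776706, the reported podStartSLOduration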
Jan 24 00:59:36.967091 containerd[1560]: time="2026-01-24T00:59:36.967012763Z" level=info msg="CreateContainer within sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\"" Jan 24 00:59:36.967959 containerd[1560]: time="2026-01-24T00:59:36.967784770Z" level=info msg="StartContainer for \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\"" Jan 24 00:59:37.057501 containerd[1560]: time="2026-01-24T00:59:37.057438164Z" level=info msg="StartContainer for \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\" returns successfully" Jan 24 00:59:37.245660 kubelet[2642]: I0124 00:59:37.245620 2642 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:59:37.344206 kubelet[2642]: I0124 00:59:37.344119 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c47289b-a147-45c1-a622-f21fd43d2452-config-volume\") pod \"coredns-668d6bf9bc-5r9nv\" (UID: \"3c47289b-a147-45c1-a622-f21fd43d2452\") " pod="kube-system/coredns-668d6bf9bc-5r9nv" Jan 24 00:59:37.344573 kubelet[2642]: I0124 00:59:37.344454 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7ff7db2-042e-432f-872e-3547cfd157da-config-volume\") pod \"coredns-668d6bf9bc-zxhf8\" (UID: \"a7ff7db2-042e-432f-872e-3547cfd157da\") " pod="kube-system/coredns-668d6bf9bc-zxhf8" Jan 24 00:59:37.344696 kubelet[2642]: I0124 00:59:37.344636 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qsmm\" (UniqueName: \"kubernetes.io/projected/a7ff7db2-042e-432f-872e-3547cfd157da-kube-api-access-2qsmm\") pod \"coredns-668d6bf9bc-zxhf8\" (UID: \"a7ff7db2-042e-432f-872e-3547cfd157da\") " pod="kube-system/coredns-668d6bf9bc-zxhf8" Jan 24 00:59:37.344696 kubelet[2642]: I0124 00:59:37.344681 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdljj\" (UniqueName: \"kubernetes.io/projected/3c47289b-a147-45c1-a622-f21fd43d2452-kube-api-access-kdljj\") pod \"coredns-668d6bf9bc-5r9nv\" (UID: \"3c47289b-a147-45c1-a622-f21fd43d2452\") " pod="kube-system/coredns-668d6bf9bc-5r9nv" Jan 24 00:59:37.577574 kubelet[2642]: E0124 00:59:37.577233 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:37.578529 containerd[1560]: time="2026-01-24T00:59:37.578499749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxhf8,Uid:a7ff7db2-042e-432f-872e-3547cfd157da,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:37.584954 kubelet[2642]: E0124 00:59:37.584622 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:37.585455 containerd[1560]: time="2026-01-24T00:59:37.585430482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5r9nv,Uid:3c47289b-a147-45c1-a622-f21fd43d2452,Namespace:kube-system,Attempt:0,}" Jan 24 00:59:37.906791 kubelet[2642]: E0124 00:59:37.906594 2642 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:37.938953 kubelet[2642]: I0124 00:59:37.936559 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kjbdr" podStartSLOduration=5.535369746 podStartE2EDuration="11.936540211s" podCreationTimestamp="2026-01-24 00:59:26 +0000 UTC" firstStartedPulling="2026-01-24 00:59:27.119502367 +0000 UTC m=+6.397199716" lastFinishedPulling="2026-01-24 00:59:33.520672822 +0000 UTC m=+12.798370181" observedRunningTime="2026-01-24 00:59:37.936257108 +0000 UTC m=+17.213954477" watchObservedRunningTime="2026-01-24 00:59:37.936540211 +0000 UTC m=+17.214237570" Jan 24 00:59:38.908320 kubelet[2642]: E0124 00:59:38.908242 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:39.276345 systemd-networkd[1251]: cilium_host: Link UP Jan 24 00:59:39.277299 systemd-networkd[1251]: cilium_net: Link UP Jan 24 00:59:39.277641 systemd-networkd[1251]: cilium_net: Gained carrier Jan 24 00:59:39.278319 systemd-networkd[1251]: cilium_host: Gained carrier Jan 24 00:59:39.391332 systemd-networkd[1251]: cilium_vxlan: Link UP Jan 24 00:59:39.391360 systemd-networkd[1251]: cilium_vxlan: Gained carrier Jan 24 00:59:39.591888 kernel: NET: Registered PF_ALG protocol family Jan 24 00:59:39.838053 systemd-networkd[1251]: cilium_host: Gained IPv6LL Jan 24 00:59:39.909964 kubelet[2642]: E0124 00:59:39.909933 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:40.030070 systemd-networkd[1251]: cilium_net: Gained IPv6LL Jan 24 00:59:40.279607 systemd-networkd[1251]: lxc_health: Link UP Jan 24 00:59:40.292699 systemd-networkd[1251]: lxc_health: Gained carrier Jan 24 00:59:40.542056 systemd-networkd[1251]: cilium_vxlan: Gained IPv6LL Jan 24 00:59:40.664143 systemd-networkd[1251]: lxc5b8ee8d05202: Link UP Jan 24 00:59:40.674442 systemd-networkd[1251]: lxc9ca22b0256d7: Link UP Jan 24 00:59:40.687921 kernel: eth0: renamed from tmpe596c Jan 24 00:59:40.701151 kernel: eth0: renamed from tmpbc159 Jan 24 00:59:40.713314 systemd-networkd[1251]: lxc5b8ee8d05202: Gained carrier Jan 24 00:59:40.715064 systemd-networkd[1251]: lxc9ca22b0256d7: Gained carrier Jan 24 00:59:41.023661 kubelet[2642]: E0124 00:59:41.023471 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:41.205066 update_engine[1551]: I20260124 00:59:41.204944 1551 update_attempter.cc:509] Updating boot flags... Jan 24 00:59:41.230952 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3500) Jan 24 00:59:41.268578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3500) Jan 24 00:59:41.503090 systemd-networkd[1251]: lxc_health: Gained IPv6LL Jan 24 00:59:42.206139 systemd-networkd[1251]: lxc9ca22b0256d7: Gained IPv6LL Jan 24 00:59:42.718059 systemd-networkd[1251]: lxc5b8ee8d05202: Gained IPv6LL Jan 24 00:59:43.851332 containerd[1560]: time="2026-01-24T00:59:43.851165193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:43.851332 containerd[1560]: time="2026-01-24T00:59:43.851242217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:43.851332 containerd[1560]: time="2026-01-24T00:59:43.851292810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:43.851792 containerd[1560]: time="2026-01-24T00:59:43.851387015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:43.871935 containerd[1560]: time="2026-01-24T00:59:43.866388573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:59:43.871935 containerd[1560]: time="2026-01-24T00:59:43.870686033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:59:43.871935 containerd[1560]: time="2026-01-24T00:59:43.870700270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:43.871935 containerd[1560]: time="2026-01-24T00:59:43.870953961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:59:43.886613 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:59:43.895475 systemd[1]: run-containerd-runc-k8s.io-e596c85be5786cebe008f64ced79905181fc9b828d516872cf417694101f9c8a-runc.dGp3Ni.mount: Deactivated successfully. 
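The plugin-loading bursts here bracket the usual CRI sequence, completed just below: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer reports success. A toy model of that call order (illustrative only; a real kubelet speaks gRPC to the containerd CRI socket, and ids are assigned by the runtime rather than hashed like this):

import hashlib
from dataclasses import dataclass, field

@dataclass
class ToyCRI:
    sandboxes: dict = field(default_factory=dict)
    containers: dict = field(default_factory=dict)

    def run_pod_sandbox(self, pod: str) -> str:
        sid = hashlib.sha256(b"sandbox/" + pod.encode()).hexdigest()
        self.sandboxes[sid] = pod
        return sid                      # "returns sandbox id ..." in the log

    def create_container(self, sid: str, name: str) -> str:
        cid = hashlib.sha256(f"{sid}/{name}".encode()).hexdigest()
        self.containers[cid] = (sid, name)
        return cid                      # "returns container id ..." in the log

    def start_container(self, cid: str) -> str:
        return f'StartContainer for "{cid[:12]}..." returns successfully'

cri = ToyCRI()
sid = cri.run_pod_sandbox("kube-system/coredns-668d6bf9bc-5r9nv")
print(cri.start_container(cri.create_container(sid, "coredns")))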
Jan 24 00:59:43.902424 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 24 00:59:43.929950 containerd[1560]: time="2026-01-24T00:59:43.929915587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5r9nv,Uid:3c47289b-a147-45c1-a622-f21fd43d2452,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc1591a9514f71749c721e40131f6467f5379724f1ce13811096e194dfd9e2c1\"" Jan 24 00:59:43.930748 kubelet[2642]: E0124 00:59:43.930709 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:43.933658 containerd[1560]: time="2026-01-24T00:59:43.933123025Z" level=info msg="CreateContainer within sandbox \"bc1591a9514f71749c721e40131f6467f5379724f1ce13811096e194dfd9e2c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:59:43.939388 containerd[1560]: time="2026-01-24T00:59:43.939341772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxhf8,Uid:a7ff7db2-042e-432f-872e-3547cfd157da,Namespace:kube-system,Attempt:0,} returns sandbox id \"e596c85be5786cebe008f64ced79905181fc9b828d516872cf417694101f9c8a\"" Jan 24 00:59:43.941270 kubelet[2642]: E0124 00:59:43.941215 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:43.944965 containerd[1560]: time="2026-01-24T00:59:43.944094664Z" level=info msg="CreateContainer within sandbox \"e596c85be5786cebe008f64ced79905181fc9b828d516872cf417694101f9c8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:59:43.961243 containerd[1560]: time="2026-01-24T00:59:43.961178816Z" level=info msg="CreateContainer within sandbox \"bc1591a9514f71749c721e40131f6467f5379724f1ce13811096e194dfd9e2c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a301baaab540d112c953c33fcacbbb3ab368c12c49295cda549dbd93874cb37\"" Jan 24 00:59:43.961814 containerd[1560]: time="2026-01-24T00:59:43.961785594Z" level=info msg="StartContainer for \"0a301baaab540d112c953c33fcacbbb3ab368c12c49295cda549dbd93874cb37\"" Jan 24 00:59:43.972787 containerd[1560]: time="2026-01-24T00:59:43.972708259Z" level=info msg="CreateContainer within sandbox \"e596c85be5786cebe008f64ced79905181fc9b828d516872cf417694101f9c8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1fa873930b2ad87ffefa69fd4a0dd4f5aacd01550211b295d02d72f104505ec\"" Jan 24 00:59:43.973718 containerd[1560]: time="2026-01-24T00:59:43.973674232Z" level=info msg="StartContainer for \"b1fa873930b2ad87ffefa69fd4a0dd4f5aacd01550211b295d02d72f104505ec\"" Jan 24 00:59:44.042142 containerd[1560]: time="2026-01-24T00:59:44.042060352Z" level=info msg="StartContainer for \"0a301baaab540d112c953c33fcacbbb3ab368c12c49295cda549dbd93874cb37\" returns successfully" Jan 24 00:59:44.045980 containerd[1560]: time="2026-01-24T00:59:44.045940146Z" level=info msg="StartContainer for \"b1fa873930b2ad87ffefa69fd4a0dd4f5aacd01550211b295d02d72f104505ec\" returns successfully" Jan 24 00:59:44.921564 kubelet[2642]: E0124 00:59:44.921339 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:44.923033 kubelet[2642]: E0124 00:59:44.923000 2642 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:44.943747 kubelet[2642]: I0124 00:59:44.943667 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zxhf8" podStartSLOduration=17.943653213 podStartE2EDuration="17.943653213s" podCreationTimestamp="2026-01-24 00:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:59:44.930570498 +0000 UTC m=+24.208267848" watchObservedRunningTime="2026-01-24 00:59:44.943653213 +0000 UTC m=+24.221350563" Jan 24 00:59:44.944277 kubelet[2642]: I0124 00:59:44.944051 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5r9nv" podStartSLOduration=17.944045341 podStartE2EDuration="17.944045341s" podCreationTimestamp="2026-01-24 00:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:59:44.942246954 +0000 UTC m=+24.219944304" watchObservedRunningTime="2026-01-24 00:59:44.944045341 +0000 UTC m=+24.221742690" Jan 24 00:59:45.924358 kubelet[2642]: E0124 00:59:45.924289 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:45.924358 kubelet[2642]: E0124 00:59:45.924325 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:46.925541 kubelet[2642]: E0124 00:59:46.925478 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:46.925541 kubelet[2642]: E0124 00:59:46.925524 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:48.530283 kubelet[2642]: I0124 00:59:48.530126 2642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:59:48.530815 kubelet[2642]: E0124 00:59:48.530590 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:48.929762 kubelet[2642]: E0124 00:59:48.929729 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 00:59:49.656254 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:49576.service - OpenSSH per-connection server daemon (10.0.0.1:49576). Jan 24 00:59:49.690809 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 49576 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:49.692338 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:49.697117 systemd-logind[1545]: New session 8 of user core. Jan 24 00:59:49.708370 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 24 00:59:49.987082 sshd[4041]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:49.990687 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:49576.service: Deactivated successfully. Jan 24 00:59:49.993087 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:59:49.993110 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:59:49.994770 systemd-logind[1545]: Removed session 8. Jan 24 00:59:54.999096 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:39236.service - OpenSSH per-connection server daemon (10.0.0.1:39236). Jan 24 00:59:55.029319 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 39236 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 00:59:55.030628 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:59:55.035454 systemd-logind[1545]: New session 9 of user core. Jan 24 00:59:55.044190 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:59:55.158512 sshd[4058]: pam_unix(sshd:session): session closed for user core Jan 24 00:59:55.163672 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:39236.service: Deactivated successfully. Jan 24 00:59:55.166530 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:59:55.166613 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:59:55.168422 systemd-logind[1545]: Removed session 9. Jan 24 01:00:00.174119 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:39250.service - OpenSSH per-connection server daemon (10.0.0.1:39250). Jan 24 01:00:00.211520 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 39250 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:00.213029 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:00.217519 systemd-logind[1545]: New session 10 of user core. Jan 24 01:00:00.228146 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 01:00:00.338411 sshd[4076]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:00.342063 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:39250.service: Deactivated successfully. Jan 24 01:00:00.344655 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. Jan 24 01:00:00.344664 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 01:00:00.346109 systemd-logind[1545]: Removed session 10. Jan 24 01:00:05.352128 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:55408.service - OpenSSH per-connection server daemon (10.0.0.1:55408). Jan 24 01:00:05.382454 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 55408 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:05.384045 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:05.388373 systemd-logind[1545]: New session 11 of user core. Jan 24 01:00:05.398128 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 01:00:05.507716 sshd[4093]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:05.516100 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:55410.service - OpenSSH per-connection server daemon (10.0.0.1:55410). Jan 24 01:00:05.516601 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:55408.service: Deactivated successfully. Jan 24 01:00:05.518461 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 01:00:05.520016 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. 
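Every SSH connection in this log follows the same shape: the per-connection service starts, sshd accepts the publickey, pam opens the session, systemd-logind allocates a numbered session with a session-N.scope, and on disconnect the scope deactivates and logind removes the session. A sketch that pairs opens with closes by session number, matching only the logind lines (the pam and scope lines trace the same lifecycle but carry no session number):

import re

NEW = re.compile(r"New session (\d+) of user (\S+)\.")
REMOVED = re.compile(r"Removed session (\d+)\.")

def session_lifecycle(log_text: str) -> dict[str, bool]:
    # Map session number -> whether a matching "Removed session" was seen.
    opened = {m.group(1): m.group(2) for m in NEW.finditer(log_text)}
    closed = {m.group(1) for m in REMOVED.finditer(log_text)}
    return {sid: sid in closed for sid in opened}

sample = ("systemd-logind[1545]: New session 8 of user core. "
          "systemd-logind[1545]: Removed session 8.")
print(session_lifecycle(sample))   # {'8': True}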
Jan 24 01:00:05.521882 systemd-logind[1545]: Removed session 11. Jan 24 01:00:05.546680 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 55410 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:05.548163 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:05.552619 systemd-logind[1545]: New session 12 of user core. Jan 24 01:00:05.560129 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 01:00:05.703617 sshd[4107]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:05.717236 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:55412.service - OpenSSH per-connection server daemon (10.0.0.1:55412). Jan 24 01:00:05.718454 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:55410.service: Deactivated successfully. Jan 24 01:00:05.729335 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 01:00:05.729970 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. Jan 24 01:00:05.733355 systemd-logind[1545]: Removed session 12. Jan 24 01:00:05.763067 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 55412 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:05.764683 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:05.770381 systemd-logind[1545]: New session 13 of user core. Jan 24 01:00:05.776295 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 01:00:05.901262 sshd[4119]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:05.905416 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:55412.service: Deactivated successfully. Jan 24 01:00:05.908167 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 01:00:05.908284 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. Jan 24 01:00:05.910113 systemd-logind[1545]: Removed session 13. Jan 24 01:00:10.909116 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:55418.service - OpenSSH per-connection server daemon (10.0.0.1:55418). Jan 24 01:00:10.940172 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 55418 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:10.941515 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:10.945576 systemd-logind[1545]: New session 14 of user core. Jan 24 01:00:10.955123 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 01:00:11.057927 sshd[4137]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:11.061451 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:55418.service: Deactivated successfully. Jan 24 01:00:11.064432 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 01:00:11.064448 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. Jan 24 01:00:11.066470 systemd-logind[1545]: Removed session 14. Jan 24 01:00:16.074201 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:59836.service - OpenSSH per-connection server daemon (10.0.0.1:59836). Jan 24 01:00:16.108702 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 59836 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:16.110318 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:16.115314 systemd-logind[1545]: New session 15 of user core. Jan 24 01:00:16.128177 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 01:00:16.236181 sshd[4153]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:16.240609 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:59836.service: Deactivated successfully. Jan 24 01:00:16.243433 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. Jan 24 01:00:16.243545 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 01:00:16.245196 systemd-logind[1545]: Removed session 15. Jan 24 01:00:21.250103 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:59850.service - OpenSSH per-connection server daemon (10.0.0.1:59850). Jan 24 01:00:21.279867 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 59850 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:21.281223 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:21.285431 systemd-logind[1545]: New session 16 of user core. Jan 24 01:00:21.296119 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 01:00:21.400535 sshd[4170]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:21.409139 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:59852.service - OpenSSH per-connection server daemon (10.0.0.1:59852). Jan 24 01:00:21.409584 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:59850.service: Deactivated successfully. Jan 24 01:00:21.412763 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. Jan 24 01:00:21.413807 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 01:00:21.415353 systemd-logind[1545]: Removed session 16. Jan 24 01:00:21.441651 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 59852 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:21.443005 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:21.447278 systemd-logind[1545]: New session 17 of user core. Jan 24 01:00:21.456114 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 01:00:21.672809 sshd[4182]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:21.679088 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:59860.service - OpenSSH per-connection server daemon (10.0.0.1:59860). Jan 24 01:00:21.679526 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:59852.service: Deactivated successfully. Jan 24 01:00:21.682614 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. Jan 24 01:00:21.682721 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 01:00:21.684694 systemd-logind[1545]: Removed session 17. Jan 24 01:00:21.714779 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 59860 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:21.716275 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:21.720772 systemd-logind[1545]: New session 18 of user core. Jan 24 01:00:21.727209 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 01:00:22.207104 sshd[4196]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:22.214101 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:59874.service - OpenSSH per-connection server daemon (10.0.0.1:59874). Jan 24 01:00:22.214562 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:59860.service: Deactivated successfully. Jan 24 01:00:22.222429 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 01:00:22.225383 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. 
Jan 24 01:00:22.228389 systemd-logind[1545]: Removed session 18. Jan 24 01:00:22.251321 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 59874 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:22.252956 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:22.258017 systemd-logind[1545]: New session 19 of user core. Jan 24 01:00:22.269188 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 01:00:22.513610 sshd[4214]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:22.524177 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:59888.service - OpenSSH per-connection server daemon (10.0.0.1:59888). Jan 24 01:00:22.524677 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:59874.service: Deactivated successfully. Jan 24 01:00:22.527151 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 01:00:22.529002 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. Jan 24 01:00:22.530916 systemd-logind[1545]: Removed session 19. Jan 24 01:00:22.557260 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 59888 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:22.559267 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:22.564853 systemd-logind[1545]: New session 20 of user core. Jan 24 01:00:22.575168 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 01:00:22.687142 sshd[4230]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:22.690751 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:59888.service: Deactivated successfully. Jan 24 01:00:22.693162 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. Jan 24 01:00:22.693176 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 01:00:22.694715 systemd-logind[1545]: Removed session 20. Jan 24 01:00:27.699359 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:40452.service - OpenSSH per-connection server daemon (10.0.0.1:40452). Jan 24 01:00:27.730295 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 40452 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:27.731970 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:27.737102 systemd-logind[1545]: New session 21 of user core. Jan 24 01:00:27.747201 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 01:00:27.857742 sshd[4252]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:27.862570 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:40452.service: Deactivated successfully. Jan 24 01:00:27.865247 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 01:00:27.865255 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. Jan 24 01:00:27.866494 systemd-logind[1545]: Removed session 21. Jan 24 01:00:32.873091 systemd[1]: Started sshd@21-10.0.0.151:22-10.0.0.1:40456.service - OpenSSH per-connection server daemon (10.0.0.1:40456). Jan 24 01:00:32.907251 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 40456 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:32.909201 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:32.913742 systemd-logind[1545]: New session 22 of user core. Jan 24 01:00:32.923128 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 24 01:00:33.037263 sshd[4268]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:33.041217 systemd[1]: sshd@21-10.0.0.151:22-10.0.0.1:40456.service: Deactivated successfully. Jan 24 01:00:33.043746 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. Jan 24 01:00:33.043795 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 01:00:33.045436 systemd-logind[1545]: Removed session 22. Jan 24 01:00:33.849804 kubelet[2642]: E0124 01:00:33.849731 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:33.849804 kubelet[2642]: E0124 01:00:33.849786 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:38.055143 systemd[1]: Started sshd@22-10.0.0.151:22-10.0.0.1:60320.service - OpenSSH per-connection server daemon (10.0.0.1:60320). Jan 24 01:00:38.085496 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 60320 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:38.087309 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:38.091613 systemd-logind[1545]: New session 23 of user core. Jan 24 01:00:38.097140 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 01:00:38.203128 sshd[4283]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:38.212258 systemd[1]: Started sshd@23-10.0.0.151:22-10.0.0.1:60328.service - OpenSSH per-connection server daemon (10.0.0.1:60328). Jan 24 01:00:38.212722 systemd[1]: sshd@22-10.0.0.151:22-10.0.0.1:60320.service: Deactivated successfully. Jan 24 01:00:38.215924 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. Jan 24 01:00:38.216952 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 01:00:38.218467 systemd-logind[1545]: Removed session 23. Jan 24 01:00:38.242690 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 60328 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:38.244382 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:38.248766 systemd-logind[1545]: New session 24 of user core. Jan 24 01:00:38.258109 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 01:00:39.783644 containerd[1560]: time="2026-01-24T01:00:39.783492142Z" level=info msg="StopContainer for \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\" with timeout 30 (s)" Jan 24 01:00:39.784415 containerd[1560]: time="2026-01-24T01:00:39.783966441Z" level=info msg="Stop container \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\" with signal terminated" Jan 24 01:00:39.836006 containerd[1560]: time="2026-01-24T01:00:39.835938452Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 01:00:39.838727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200-rootfs.mount: Deactivated successfully. 
Jan 24 01:00:39.844017 containerd[1560]: time="2026-01-24T01:00:39.843900679Z" level=info msg="StopContainer for \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\" with timeout 2 (s)" Jan 24 01:00:39.844309 containerd[1560]: time="2026-01-24T01:00:39.844271816Z" level=info msg="Stop container \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\" with signal terminated" Jan 24 01:00:39.845641 containerd[1560]: time="2026-01-24T01:00:39.845586520Z" level=info msg="shim disconnected" id=7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200 namespace=k8s.io Jan 24 01:00:39.845722 containerd[1560]: time="2026-01-24T01:00:39.845645870Z" level=warning msg="cleaning up after shim disconnected" id=7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200 namespace=k8s.io Jan 24 01:00:39.845722 containerd[1560]: time="2026-01-24T01:00:39.845658784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:39.853459 systemd-networkd[1251]: lxc_health: Link DOWN Jan 24 01:00:39.853927 systemd-networkd[1251]: lxc_health: Lost carrier Jan 24 01:00:39.868335 containerd[1560]: time="2026-01-24T01:00:39.868279046Z" level=info msg="StopContainer for \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\" returns successfully" Jan 24 01:00:39.871978 containerd[1560]: time="2026-01-24T01:00:39.871891912Z" level=info msg="StopPodSandbox for \"3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a\"" Jan 24 01:00:39.871978 containerd[1560]: time="2026-01-24T01:00:39.871929211Z" level=info msg="Container to stop \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.874583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a-shm.mount: Deactivated successfully. Jan 24 01:00:39.906720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695-rootfs.mount: Deactivated successfully. Jan 24 01:00:39.912458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a-rootfs.mount: Deactivated successfully. 
Jan 24 01:00:39.915583 containerd[1560]: time="2026-01-24T01:00:39.915450949Z" level=info msg="shim disconnected" id=58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695 namespace=k8s.io Jan 24 01:00:39.915730 containerd[1560]: time="2026-01-24T01:00:39.915592922Z" level=warning msg="cleaning up after shim disconnected" id=58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695 namespace=k8s.io Jan 24 01:00:39.915730 containerd[1560]: time="2026-01-24T01:00:39.915626323Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:39.916886 containerd[1560]: time="2026-01-24T01:00:39.916770348Z" level=info msg="shim disconnected" id=3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a namespace=k8s.io Jan 24 01:00:39.916886 containerd[1560]: time="2026-01-24T01:00:39.916872578Z" level=warning msg="cleaning up after shim disconnected" id=3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a namespace=k8s.io Jan 24 01:00:39.916886 containerd[1560]: time="2026-01-24T01:00:39.916881925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:39.936197 containerd[1560]: time="2026-01-24T01:00:39.936058638Z" level=info msg="TearDown network for sandbox \"3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a\" successfully" Jan 24 01:00:39.936197 containerd[1560]: time="2026-01-24T01:00:39.936091360Z" level=info msg="StopPodSandbox for \"3a09854ab4e1aa7da73f4c9d44a5832b2da290464c6b379739e4ab2159c0ac4a\" returns successfully" Jan 24 01:00:39.939749 containerd[1560]: time="2026-01-24T01:00:39.939692574Z" level=info msg="StopContainer for \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\" returns successfully" Jan 24 01:00:39.940281 containerd[1560]: time="2026-01-24T01:00:39.940027193Z" level=info msg="StopPodSandbox for \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\"" Jan 24 01:00:39.940281 containerd[1560]: time="2026-01-24T01:00:39.940089378Z" level=info msg="Container to stop \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.940281 containerd[1560]: time="2026-01-24T01:00:39.940102612Z" level=info msg="Container to stop \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.940281 containerd[1560]: time="2026-01-24T01:00:39.940112912Z" level=info msg="Container to stop \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.940281 containerd[1560]: time="2026-01-24T01:00:39.940122489Z" level=info msg="Container to stop \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.940281 containerd[1560]: time="2026-01-24T01:00:39.940131016Z" level=info msg="Container to stop \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 24 01:00:39.975314 containerd[1560]: time="2026-01-24T01:00:39.975217202Z" level=info msg="shim disconnected" id=fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb namespace=k8s.io Jan 24 01:00:39.975314 containerd[1560]: time="2026-01-24T01:00:39.975284687Z" level=warning msg="cleaning up after shim disconnected" 
id=fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb namespace=k8s.io Jan 24 01:00:39.975314 containerd[1560]: time="2026-01-24T01:00:39.975294175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:39.991107 containerd[1560]: time="2026-01-24T01:00:39.991029384Z" level=info msg="TearDown network for sandbox \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" successfully" Jan 24 01:00:39.991107 containerd[1560]: time="2026-01-24T01:00:39.991087884Z" level=info msg="StopPodSandbox for \"fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb\" returns successfully" Jan 24 01:00:39.998243 kubelet[2642]: I0124 01:00:39.998155 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64d263b1-8906-4277-a226-f7fc35939162-cilium-config-path\") pod \"64d263b1-8906-4277-a226-f7fc35939162\" (UID: \"64d263b1-8906-4277-a226-f7fc35939162\") " Jan 24 01:00:39.998243 kubelet[2642]: I0124 01:00:39.998202 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf4ph\" (UniqueName: \"kubernetes.io/projected/64d263b1-8906-4277-a226-f7fc35939162-kube-api-access-tf4ph\") pod \"64d263b1-8906-4277-a226-f7fc35939162\" (UID: \"64d263b1-8906-4277-a226-f7fc35939162\") " Jan 24 01:00:40.004350 kubelet[2642]: I0124 01:00:40.004285 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64d263b1-8906-4277-a226-f7fc35939162-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "64d263b1-8906-4277-a226-f7fc35939162" (UID: "64d263b1-8906-4277-a226-f7fc35939162"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 01:00:40.004463 kubelet[2642]: I0124 01:00:40.004377 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d263b1-8906-4277-a226-f7fc35939162-kube-api-access-tf4ph" (OuterVolumeSpecName: "kube-api-access-tf4ph") pod "64d263b1-8906-4277-a226-f7fc35939162" (UID: "64d263b1-8906-4277-a226-f7fc35939162"). InnerVolumeSpecName "kube-api-access-tf4ph". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 01:00:40.046743 kubelet[2642]: I0124 01:00:40.046618 2642 scope.go:117] "RemoveContainer" containerID="7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200" Jan 24 01:00:40.052142 containerd[1560]: time="2026-01-24T01:00:40.052112335Z" level=info msg="RemoveContainer for \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\"" Jan 24 01:00:40.064411 containerd[1560]: time="2026-01-24T01:00:40.064334050Z" level=info msg="RemoveContainer for \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\" returns successfully" Jan 24 01:00:40.064588 kubelet[2642]: I0124 01:00:40.064562 2642 scope.go:117] "RemoveContainer" containerID="7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200" Jan 24 01:00:40.064959 containerd[1560]: time="2026-01-24T01:00:40.064881412Z" level=error msg="ContainerStatus for \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\": not found" Jan 24 01:00:40.065175 kubelet[2642]: E0124 01:00:40.065086 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\": not found" containerID="7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200" Jan 24 01:00:40.065312 kubelet[2642]: I0124 01:00:40.065133 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200"} err="failed to get container status \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b25105b3572bdc44dab2bcc5c0704d648b3fdc931025f2f54e2b37be476a200\": not found" Jan 24 01:00:40.065312 kubelet[2642]: I0124 01:00:40.065202 2642 scope.go:117] "RemoveContainer" containerID="58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695" Jan 24 01:00:40.066708 containerd[1560]: time="2026-01-24T01:00:40.066640744Z" level=info msg="RemoveContainer for \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\"" Jan 24 01:00:40.072339 containerd[1560]: time="2026-01-24T01:00:40.072280730Z" level=info msg="RemoveContainer for \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\" returns successfully" Jan 24 01:00:40.072527 kubelet[2642]: I0124 01:00:40.072487 2642 scope.go:117] "RemoveContainer" containerID="77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344" Jan 24 01:00:40.073443 containerd[1560]: time="2026-01-24T01:00:40.073400528Z" level=info msg="RemoveContainer for \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\"" Jan 24 01:00:40.076562 containerd[1560]: time="2026-01-24T01:00:40.076505851Z" level=info msg="RemoveContainer for \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\" returns successfully" Jan 24 01:00:40.076762 kubelet[2642]: I0124 01:00:40.076718 2642 scope.go:117] "RemoveContainer" containerID="0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67" Jan 24 01:00:40.077742 containerd[1560]: time="2026-01-24T01:00:40.077705798Z" level=info msg="RemoveContainer for \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\"" Jan 24 01:00:40.081000 
containerd[1560]: time="2026-01-24T01:00:40.080955380Z" level=info msg="RemoveContainer for \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\" returns successfully" Jan 24 01:00:40.081267 kubelet[2642]: I0124 01:00:40.081189 2642 scope.go:117] "RemoveContainer" containerID="bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6" Jan 24 01:00:40.082195 containerd[1560]: time="2026-01-24T01:00:40.082156268Z" level=info msg="RemoveContainer for \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\"" Jan 24 01:00:40.085655 containerd[1560]: time="2026-01-24T01:00:40.085574937Z" level=info msg="RemoveContainer for \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\" returns successfully" Jan 24 01:00:40.085866 kubelet[2642]: I0124 01:00:40.085773 2642 scope.go:117] "RemoveContainer" containerID="4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f" Jan 24 01:00:40.087004 containerd[1560]: time="2026-01-24T01:00:40.086949123Z" level=info msg="RemoveContainer for \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\"" Jan 24 01:00:40.090514 containerd[1560]: time="2026-01-24T01:00:40.090460513Z" level=info msg="RemoveContainer for \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\" returns successfully" Jan 24 01:00:40.090765 kubelet[2642]: I0124 01:00:40.090727 2642 scope.go:117] "RemoveContainer" containerID="58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695" Jan 24 01:00:40.090940 containerd[1560]: time="2026-01-24T01:00:40.090905187Z" level=error msg="ContainerStatus for \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\": not found" Jan 24 01:00:40.091173 kubelet[2642]: E0124 01:00:40.091068 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\": not found" containerID="58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695" Jan 24 01:00:40.091173 kubelet[2642]: I0124 01:00:40.091101 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695"} err="failed to get container status \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\": rpc error: code = NotFound desc = an error occurred when try to find container \"58e2451122efc4ee81ab133e27afdfb9f0ee4bea0f677fe6071808bb5ce1d695\": not found" Jan 24 01:00:40.091173 kubelet[2642]: I0124 01:00:40.091125 2642 scope.go:117] "RemoveContainer" containerID="77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344" Jan 24 01:00:40.091335 containerd[1560]: time="2026-01-24T01:00:40.091297409Z" level=error msg="ContainerStatus for \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\": not found" Jan 24 01:00:40.091545 kubelet[2642]: E0124 01:00:40.091477 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\": not found" 
containerID="77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344" Jan 24 01:00:40.091545 kubelet[2642]: I0124 01:00:40.091518 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344"} err="failed to get container status \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\": rpc error: code = NotFound desc = an error occurred when try to find container \"77e5f2bf31635f93812dbdd55913374d1aa66b7c74fb8a005813f588dfc99344\": not found" Jan 24 01:00:40.091545 kubelet[2642]: I0124 01:00:40.091538 2642 scope.go:117] "RemoveContainer" containerID="0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67" Jan 24 01:00:40.091767 containerd[1560]: time="2026-01-24T01:00:40.091732007Z" level=error msg="ContainerStatus for \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\": not found" Jan 24 01:00:40.091958 kubelet[2642]: E0124 01:00:40.091914 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\": not found" containerID="0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67" Jan 24 01:00:40.091996 kubelet[2642]: I0124 01:00:40.091960 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67"} err="failed to get container status \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ae09a78aca89edb82e995e93b43cbf04778f3d52fd43aa1365e97428a4f2b67\": not found" Jan 24 01:00:40.091996 kubelet[2642]: I0124 01:00:40.091982 2642 scope.go:117] "RemoveContainer" containerID="bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6" Jan 24 01:00:40.092248 containerd[1560]: time="2026-01-24T01:00:40.092207654Z" level=error msg="ContainerStatus for \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\": not found" Jan 24 01:00:40.092354 kubelet[2642]: E0124 01:00:40.092316 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\": not found" containerID="bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6" Jan 24 01:00:40.092354 kubelet[2642]: I0124 01:00:40.092343 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6"} err="failed to get container status \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf27557e65707ba823db8322099fb73c7b6205c4f2ec83bd126f905477823eb6\": not found" Jan 24 01:00:40.092413 kubelet[2642]: I0124 01:00:40.092355 2642 scope.go:117] "RemoveContainer" 
containerID="4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f" Jan 24 01:00:40.092531 containerd[1560]: time="2026-01-24T01:00:40.092499548Z" level=error msg="ContainerStatus for \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\": not found" Jan 24 01:00:40.092612 kubelet[2642]: E0124 01:00:40.092579 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\": not found" containerID="4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f" Jan 24 01:00:40.092732 kubelet[2642]: I0124 01:00:40.092685 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f"} err="failed to get container status \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a821845e0eb0151d9638cefa17456ba02550d4cb2e43a978f2d00334a2dd89f\": not found" Jan 24 01:00:40.099003 kubelet[2642]: I0124 01:00:40.098930 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-host-proc-sys-kernel\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099003 kubelet[2642]: I0124 01:00:40.098974 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-xtables-lock\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099003 kubelet[2642]: I0124 01:00:40.098995 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb00d75b-3b1a-41ee-ba56-618dbb490221-hubble-tls\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099003 kubelet[2642]: I0124 01:00:40.099008 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-lib-modules\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099187 kubelet[2642]: I0124 01:00:40.099027 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-config-path\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099187 kubelet[2642]: I0124 01:00:40.099082 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-cgroup\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099187 kubelet[2642]: I0124 01:00:40.099097 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cni-path\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099187 kubelet[2642]: I0124 01:00:40.099111 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v55p\" (UniqueName: \"kubernetes.io/projected/eb00d75b-3b1a-41ee-ba56-618dbb490221-kube-api-access-4v55p\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099187 kubelet[2642]: I0124 01:00:40.099126 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-run\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099187 kubelet[2642]: I0124 01:00:40.099123 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.099341 kubelet[2642]: I0124 01:00:40.099138 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-etc-cni-netd\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099341 kubelet[2642]: I0124 01:00:40.099156 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb00d75b-3b1a-41ee-ba56-618dbb490221-clustermesh-secrets\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099341 kubelet[2642]: I0124 01:00:40.099169 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-host-proc-sys-net\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099341 kubelet[2642]: I0124 01:00:40.099174 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.099341 kubelet[2642]: I0124 01:00:40.099182 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-hostproc\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099451 kubelet[2642]: I0124 01:00:40.099202 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-hostproc" (OuterVolumeSpecName: "hostproc") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.099451 kubelet[2642]: I0124 01:00:40.099222 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-bpf-maps\") pod \"eb00d75b-3b1a-41ee-ba56-618dbb490221\" (UID: \"eb00d75b-3b1a-41ee-ba56-618dbb490221\") " Jan 24 01:00:40.099451 kubelet[2642]: I0124 01:00:40.099272 2642 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.099451 kubelet[2642]: I0124 01:00:40.099287 2642 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.099451 kubelet[2642]: I0124 01:00:40.099300 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64d263b1-8906-4277-a226-f7fc35939162-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.099451 kubelet[2642]: I0124 01:00:40.099313 2642 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tf4ph\" (UniqueName: \"kubernetes.io/projected/64d263b1-8906-4277-a226-f7fc35939162-kube-api-access-tf4ph\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.099451 kubelet[2642]: I0124 01:00:40.099329 2642 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.099653 kubelet[2642]: I0124 01:00:40.099355 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.099794 kubelet[2642]: I0124 01:00:40.099739 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.101949 kubelet[2642]: I0124 01:00:40.101918 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.102030 kubelet[2642]: I0124 01:00:40.101959 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cni-path" (OuterVolumeSpecName: "cni-path") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.102030 kubelet[2642]: I0124 01:00:40.101977 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.102292 kubelet[2642]: I0124 01:00:40.102251 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.102889 kubelet[2642]: I0124 01:00:40.099810 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 24 01:00:40.102960 kubelet[2642]: I0124 01:00:40.102944 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb00d75b-3b1a-41ee-ba56-618dbb490221-kube-api-access-4v55p" (OuterVolumeSpecName: "kube-api-access-4v55p") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "kube-api-access-4v55p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 01:00:40.103073 kubelet[2642]: I0124 01:00:40.103015 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 01:00:40.104670 kubelet[2642]: I0124 01:00:40.104612 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb00d75b-3b1a-41ee-ba56-618dbb490221-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 01:00:40.105431 kubelet[2642]: I0124 01:00:40.105379 2642 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb00d75b-3b1a-41ee-ba56-618dbb490221-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eb00d75b-3b1a-41ee-ba56-618dbb490221" (UID: "eb00d75b-3b1a-41ee-ba56-618dbb490221"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 01:00:40.200316 kubelet[2642]: I0124 01:00:40.200231 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200316 kubelet[2642]: I0124 01:00:40.200293 2642 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200316 kubelet[2642]: I0124 01:00:40.200308 2642 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4v55p\" (UniqueName: \"kubernetes.io/projected/eb00d75b-3b1a-41ee-ba56-618dbb490221-kube-api-access-4v55p\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200440 kubelet[2642]: I0124 01:00:40.200329 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200440 kubelet[2642]: I0124 01:00:40.200344 2642 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200440 kubelet[2642]: I0124 01:00:40.200357 2642 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb00d75b-3b1a-41ee-ba56-618dbb490221-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200440 kubelet[2642]: I0124 01:00:40.200369 2642 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200440 kubelet[2642]: I0124 01:00:40.200383 2642 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200440 kubelet[2642]: I0124 01:00:40.200396 2642 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb00d75b-3b1a-41ee-ba56-618dbb490221-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200440 kubelet[2642]: I0124 01:00:40.200408 2642 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb00d75b-3b1a-41ee-ba56-618dbb490221-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.200440 kubelet[2642]: I0124 01:00:40.200422 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb00d75b-3b1a-41ee-ba56-618dbb490221-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 24 01:00:40.817175 systemd[1]: var-lib-kubelet-pods-64d263b1\x2d8906\x2d4277\x2da226\x2df7fc35939162-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtf4ph.mount: Deactivated successfully. Jan 24 01:00:40.817402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb-rootfs.mount: Deactivated successfully. 
Jan 24 01:00:40.817559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fed5f202b2633a5a87d459668662d6e55f05130ebd97a845479859e891e120cb-shm.mount: Deactivated successfully. Jan 24 01:00:40.817709 systemd[1]: var-lib-kubelet-pods-eb00d75b\x2d3b1a\x2d41ee\x2dba56\x2d618dbb490221-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4v55p.mount: Deactivated successfully. Jan 24 01:00:40.817887 systemd[1]: var-lib-kubelet-pods-eb00d75b\x2d3b1a\x2d41ee\x2dba56\x2d618dbb490221-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 24 01:00:40.818012 systemd[1]: var-lib-kubelet-pods-eb00d75b\x2d3b1a\x2d41ee\x2dba56\x2d618dbb490221-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 24 01:00:40.851745 kubelet[2642]: I0124 01:00:40.851639 2642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d263b1-8906-4277-a226-f7fc35939162" path="/var/lib/kubelet/pods/64d263b1-8906-4277-a226-f7fc35939162/volumes" Jan 24 01:00:40.852378 kubelet[2642]: I0124 01:00:40.852314 2642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb00d75b-3b1a-41ee-ba56-618dbb490221" path="/var/lib/kubelet/pods/eb00d75b-3b1a-41ee-ba56-618dbb490221/volumes" Jan 24 01:00:40.916519 kubelet[2642]: E0124 01:00:40.916474 2642 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 24 01:00:41.737531 sshd[4295]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:41.746094 systemd[1]: Started sshd@24-10.0.0.151:22-10.0.0.1:60344.service - OpenSSH per-connection server daemon (10.0.0.1:60344). Jan 24 01:00:41.746536 systemd[1]: sshd@23-10.0.0.151:22-10.0.0.1:60328.service: Deactivated successfully. Jan 24 01:00:41.750404 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 01:00:41.750437 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit. Jan 24 01:00:41.752051 systemd-logind[1545]: Removed session 24. Jan 24 01:00:41.782424 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 60344 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:41.784072 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:41.788677 systemd-logind[1545]: New session 25 of user core. Jan 24 01:00:41.799121 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 01:00:42.223091 sshd[4463]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:42.232239 systemd[1]: Started sshd@25-10.0.0.151:22-10.0.0.1:60350.service - OpenSSH per-connection server daemon (10.0.0.1:60350). Jan 24 01:00:42.233612 systemd[1]: sshd@24-10.0.0.151:22-10.0.0.1:60344.service: Deactivated successfully. Jan 24 01:00:42.240557 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 01:00:42.251875 kubelet[2642]: I0124 01:00:42.250924 2642 memory_manager.go:355] "RemoveStaleState removing state" podUID="eb00d75b-3b1a-41ee-ba56-618dbb490221" containerName="cilium-agent" Jan 24 01:00:42.251875 kubelet[2642]: I0124 01:00:42.250949 2642 memory_manager.go:355] "RemoveStaleState removing state" podUID="64d263b1-8906-4277-a226-f7fc35939162" containerName="cilium-operator" Jan 24 01:00:42.255321 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit. Jan 24 01:00:42.267197 systemd-logind[1545]: Removed session 25. 
Jan 24 01:00:42.299227 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 60350 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:42.302750 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:42.307639 systemd-logind[1545]: New session 26 of user core. Jan 24 01:00:42.314222 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 24 01:00:42.315277 kubelet[2642]: I0124 01:00:42.315099 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-cilium-run\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315277 kubelet[2642]: I0124 01:00:42.315145 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-etc-cni-netd\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315277 kubelet[2642]: I0124 01:00:42.315180 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-host-proc-sys-net\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315277 kubelet[2642]: I0124 01:00:42.315207 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-host-proc-sys-kernel\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315277 kubelet[2642]: I0124 01:00:42.315233 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-cilium-cgroup\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315277 kubelet[2642]: I0124 01:00:42.315254 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-lib-modules\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315523 kubelet[2642]: I0124 01:00:42.315280 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-hostproc\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315523 kubelet[2642]: I0124 01:00:42.315305 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eede1ad-5028-4104-a832-50b225c645e6-clustermesh-secrets\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315523 kubelet[2642]: I0124 01:00:42.315329 2642 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eede1ad-5028-4104-a832-50b225c645e6-cilium-config-path\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315523 kubelet[2642]: I0124 01:00:42.315353 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tt2p\" (UniqueName: \"kubernetes.io/projected/4eede1ad-5028-4104-a832-50b225c645e6-kube-api-access-6tt2p\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315523 kubelet[2642]: I0124 01:00:42.315377 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-bpf-maps\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315523 kubelet[2642]: I0124 01:00:42.315399 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-xtables-lock\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315656 kubelet[2642]: I0124 01:00:42.315425 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eede1ad-5028-4104-a832-50b225c645e6-hubble-tls\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315656 kubelet[2642]: I0124 01:00:42.315455 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eede1ad-5028-4104-a832-50b225c645e6-cni-path\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.315656 kubelet[2642]: I0124 01:00:42.315476 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4eede1ad-5028-4104-a832-50b225c645e6-cilium-ipsec-secrets\") pod \"cilium-dz6jc\" (UID: \"4eede1ad-5028-4104-a832-50b225c645e6\") " pod="kube-system/cilium-dz6jc" Jan 24 01:00:42.367325 sshd[4477]: pam_unix(sshd:session): session closed for user core Jan 24 01:00:42.378196 systemd[1]: Started sshd@26-10.0.0.151:22-10.0.0.1:60356.service - OpenSSH per-connection server daemon (10.0.0.1:60356). Jan 24 01:00:42.378682 systemd[1]: sshd@25-10.0.0.151:22-10.0.0.1:60350.service: Deactivated successfully. Jan 24 01:00:42.382217 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit. Jan 24 01:00:42.383732 systemd[1]: session-26.scope: Deactivated successfully. Jan 24 01:00:42.385094 systemd-logind[1545]: Removed session 26. Jan 24 01:00:42.408764 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 60356 ssh2: RSA SHA256:s4fSqoqaAzPa1ksBB2TmRI5uwi06lfdKAHz+DQ9/svw Jan 24 01:00:42.410641 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 01:00:42.415389 systemd-logind[1545]: New session 27 of user core. Jan 24 01:00:42.424248 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 24 01:00:42.571342 kubelet[2642]: E0124 01:00:42.571114 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:42.571786 containerd[1560]: time="2026-01-24T01:00:42.571718467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dz6jc,Uid:4eede1ad-5028-4104-a832-50b225c645e6,Namespace:kube-system,Attempt:0,}" Jan 24 01:00:42.598165 containerd[1560]: time="2026-01-24T01:00:42.598004505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 01:00:42.598643 containerd[1560]: time="2026-01-24T01:00:42.598131190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 01:00:42.598643 containerd[1560]: time="2026-01-24T01:00:42.598195910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:42.598643 containerd[1560]: time="2026-01-24T01:00:42.598326131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 01:00:42.645230 containerd[1560]: time="2026-01-24T01:00:42.645137066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dz6jc,Uid:4eede1ad-5028-4104-a832-50b225c645e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\"" Jan 24 01:00:42.645920 kubelet[2642]: E0124 01:00:42.645887 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:42.648387 containerd[1560]: time="2026-01-24T01:00:42.648354154Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 24 01:00:42.664609 containerd[1560]: time="2026-01-24T01:00:42.664534229Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"42dbb0996512b3c7fcb6d95332f942569153865b55b7b2f047c71ff08e23f3d3\"" Jan 24 01:00:42.665637 containerd[1560]: time="2026-01-24T01:00:42.665592957Z" level=info msg="StartContainer for \"42dbb0996512b3c7fcb6d95332f942569153865b55b7b2f047c71ff08e23f3d3\"" Jan 24 01:00:42.723364 containerd[1560]: time="2026-01-24T01:00:42.723322744Z" level=info msg="StartContainer for \"42dbb0996512b3c7fcb6d95332f942569153865b55b7b2f047c71ff08e23f3d3\" returns successfully" Jan 24 01:00:42.746954 kubelet[2642]: I0124 01:00:42.746774 2642 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-24T01:00:42Z","lastTransitionTime":"2026-01-24T01:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 24 01:00:42.775051 containerd[1560]: time="2026-01-24T01:00:42.774913478Z" level=info msg="shim disconnected" id=42dbb0996512b3c7fcb6d95332f942569153865b55b7b2f047c71ff08e23f3d3 namespace=k8s.io Jan 24 01:00:42.775051 containerd[1560]: time="2026-01-24T01:00:42.774977948Z" 
level=warning msg="cleaning up after shim disconnected" id=42dbb0996512b3c7fcb6d95332f942569153865b55b7b2f047c71ff08e23f3d3 namespace=k8s.io Jan 24 01:00:42.775051 containerd[1560]: time="2026-01-24T01:00:42.774987506Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:43.064728 kubelet[2642]: E0124 01:00:43.064689 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:43.066477 containerd[1560]: time="2026-01-24T01:00:43.066434089Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 24 01:00:43.087439 containerd[1560]: time="2026-01-24T01:00:43.087352115Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"af0c73a96cbc9ce2e95c81d1ab61d2c109ea98869306dd923688ebd689c8a81c\"" Jan 24 01:00:43.088910 containerd[1560]: time="2026-01-24T01:00:43.088061850Z" level=info msg="StartContainer for \"af0c73a96cbc9ce2e95c81d1ab61d2c109ea98869306dd923688ebd689c8a81c\"" Jan 24 01:00:43.145148 containerd[1560]: time="2026-01-24T01:00:43.145072380Z" level=info msg="StartContainer for \"af0c73a96cbc9ce2e95c81d1ab61d2c109ea98869306dd923688ebd689c8a81c\" returns successfully" Jan 24 01:00:43.174641 containerd[1560]: time="2026-01-24T01:00:43.174580675Z" level=info msg="shim disconnected" id=af0c73a96cbc9ce2e95c81d1ab61d2c109ea98869306dd923688ebd689c8a81c namespace=k8s.io Jan 24 01:00:43.174641 containerd[1560]: time="2026-01-24T01:00:43.174635305Z" level=warning msg="cleaning up after shim disconnected" id=af0c73a96cbc9ce2e95c81d1ab61d2c109ea98869306dd923688ebd689c8a81c namespace=k8s.io Jan 24 01:00:43.174641 containerd[1560]: time="2026-01-24T01:00:43.174644713Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:44.069308 kubelet[2642]: E0124 01:00:44.069237 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:44.071483 containerd[1560]: time="2026-01-24T01:00:44.071425900Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 24 01:00:44.090899 containerd[1560]: time="2026-01-24T01:00:44.090790461Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8b3abd3fa039c914f24abf3ccb07eb03018bb8a052f34b9718ea1a2def69a25\"" Jan 24 01:00:44.091558 containerd[1560]: time="2026-01-24T01:00:44.091478204Z" level=info msg="StartContainer for \"d8b3abd3fa039c914f24abf3ccb07eb03018bb8a052f34b9718ea1a2def69a25\"" Jan 24 01:00:44.166888 containerd[1560]: time="2026-01-24T01:00:44.165350703Z" level=info msg="StartContainer for \"d8b3abd3fa039c914f24abf3ccb07eb03018bb8a052f34b9718ea1a2def69a25\" returns successfully" Jan 24 01:00:44.196606 containerd[1560]: time="2026-01-24T01:00:44.196483936Z" level=info msg="shim disconnected" id=d8b3abd3fa039c914f24abf3ccb07eb03018bb8a052f34b9718ea1a2def69a25 namespace=k8s.io Jan 24 01:00:44.196606 containerd[1560]: 
time="2026-01-24T01:00:44.196539528Z" level=warning msg="cleaning up after shim disconnected" id=d8b3abd3fa039c914f24abf3ccb07eb03018bb8a052f34b9718ea1a2def69a25 namespace=k8s.io Jan 24 01:00:44.196606 containerd[1560]: time="2026-01-24T01:00:44.196549136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:44.426591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8b3abd3fa039c914f24abf3ccb07eb03018bb8a052f34b9718ea1a2def69a25-rootfs.mount: Deactivated successfully. Jan 24 01:00:45.073371 kubelet[2642]: E0124 01:00:45.073157 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 24 01:00:45.075616 containerd[1560]: time="2026-01-24T01:00:45.075553275Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 24 01:00:45.090654 containerd[1560]: time="2026-01-24T01:00:45.090590602Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74a41a1e42f8762c80e055b571175d5b7033c383c756ba4d58502449be55186a\"" Jan 24 01:00:45.091952 containerd[1560]: time="2026-01-24T01:00:45.091389493Z" level=info msg="StartContainer for \"74a41a1e42f8762c80e055b571175d5b7033c383c756ba4d58502449be55186a\"" Jan 24 01:00:45.148101 containerd[1560]: time="2026-01-24T01:00:45.148044368Z" level=info msg="StartContainer for \"74a41a1e42f8762c80e055b571175d5b7033c383c756ba4d58502449be55186a\" returns successfully" Jan 24 01:00:45.169675 containerd[1560]: time="2026-01-24T01:00:45.169609826Z" level=info msg="shim disconnected" id=74a41a1e42f8762c80e055b571175d5b7033c383c756ba4d58502449be55186a namespace=k8s.io Jan 24 01:00:45.169675 containerd[1560]: time="2026-01-24T01:00:45.169664477Z" level=warning msg="cleaning up after shim disconnected" id=74a41a1e42f8762c80e055b571175d5b7033c383c756ba4d58502449be55186a namespace=k8s.io Jan 24 01:00:45.169675 containerd[1560]: time="2026-01-24T01:00:45.169673074Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 01:00:45.426732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74a41a1e42f8762c80e055b571175d5b7033c383c756ba4d58502449be55186a-rootfs.mount: Deactivated successfully. 
Jan 24 01:00:45.917763 kubelet[2642]: E0124 01:00:45.917618 2642 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 24 01:00:46.077927 kubelet[2642]: E0124 01:00:46.077722 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 01:00:46.081986 containerd[1560]: time="2026-01-24T01:00:46.081943979Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 24 01:00:46.101103 containerd[1560]: time="2026-01-24T01:00:46.100996880Z" level=info msg="CreateContainer within sandbox \"75bd347bf5611a55007ebc147129b9a2d7d2d842a13cf6c73b8e1b709c3678dc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cd74291d403ed2b2b2f6794a91283d39e55c4a09ecb2b948f375149d3aa87b78\""
Jan 24 01:00:46.101754 containerd[1560]: time="2026-01-24T01:00:46.101617162Z" level=info msg="StartContainer for \"cd74291d403ed2b2b2f6794a91283d39e55c4a09ecb2b948f375149d3aa87b78\""
Jan 24 01:00:46.163382 containerd[1560]: time="2026-01-24T01:00:46.163337461Z" level=info msg="StartContainer for \"cd74291d403ed2b2b2f6794a91283d39e55c4a09ecb2b948f375149d3aa87b78\" returns successfully"
Jan 24 01:00:46.562895 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 24 01:00:46.851339 kubelet[2642]: E0124 01:00:46.851183 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 01:00:47.083651 kubelet[2642]: E0124 01:00:47.083349 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 01:00:47.098048 kubelet[2642]: I0124 01:00:47.097936 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dz6jc" podStartSLOduration=5.097913055 podStartE2EDuration="5.097913055s" podCreationTimestamp="2026-01-24 01:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 01:00:47.097287173 +0000 UTC m=+86.374984522" watchObservedRunningTime="2026-01-24 01:00:47.097913055 +0000 UTC m=+86.375610444"
Jan 24 01:00:48.572479 kubelet[2642]: E0124 01:00:48.572405 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 01:00:49.734699 systemd-networkd[1251]: lxc_health: Link UP
Jan 24 01:00:49.743663 systemd-networkd[1251]: lxc_health: Gained carrier
Jan 24 01:00:50.573739 kubelet[2642]: E0124 01:00:50.573066 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 01:00:51.097616 kubelet[2642]: E0124 01:00:51.097461 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 24 01:00:51.454133 systemd-networkd[1251]: lxc_health: Gained IPv6LL
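[Editor's note] The "cni plugin not initialized" condition above clears once the cilium-agent container writes its CNI configuration (conventionally a conflist whose plugin type is cilium-cni) under /etc/cni/net.d, after which kubelet flips NetworkReady to true; the lxc_health interface that systemd-networkd reports is the veth device Cilium creates for its own connectivity health checks. The sketch below illustrates only the shape of that readiness gate -- polling for a CNI config file -- and is not kubelet's actual implementation; the directory is the conventional default and the polling interval is arbitrary.

package main

import (
	"log"
	"path/filepath"
	"time"
)

func main() {
	// Conventional CNI config directory; kubelet keeps reporting
	// NetworkReady=false until a usable config appears here.
	const cniConfDir = "/etc/cni/net.d"
	for {
		matches, err := filepath.Glob(filepath.Join(cniConfDir, "*.conflist"))
		if err != nil {
			log.Fatal(err)
		}
		if len(matches) > 0 {
			log.Printf("CNI config present: %v", matches)
			return
		}
		log.Print("no CNI config yet; container runtime network not ready")
		time.Sleep(2 * time.Second)
	}
}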
Jan 24 01:00:55.058305 systemd[1]: run-containerd-runc-k8s.io-cd74291d403ed2b2b2f6794a91283d39e55c4a09ecb2b948f375149d3aa87b78-runc.fyBhNe.mount: Deactivated successfully.
Jan 24 01:00:57.212496 sshd[4486]: pam_unix(sshd:session): session closed for user core
Jan 24 01:00:57.216790 systemd[1]: sshd@26-10.0.0.151:22-10.0.0.1:60356.service: Deactivated successfully.
Jan 24 01:00:57.219252 systemd-logind[1545]: Session 27 logged out. Waiting for processes to exit.
Jan 24 01:00:57.219289 systemd[1]: session-27.scope: Deactivated successfully.
Jan 24 01:00:57.220518 systemd-logind[1545]: Removed session 27.
Jan 24 01:00:57.849728 kubelet[2642]: E0124 01:00:57.849623 2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"