Jan 29 16:24:42.884358 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:24:42.884424 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:24:42.884462 kernel: BIOS-provided physical RAM map:
Jan 29 16:24:42.884472 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:24:42.884481 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:24:42.884489 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:24:42.884500 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 16:24:42.884509 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 16:24:42.884518 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:24:42.884530 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:24:42.884538 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:24:42.884547 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:24:42.884556 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:24:42.884564 kernel: NX (Execute Disable) protection: active
Jan 29 16:24:42.884575 kernel: APIC: Static calls initialized
Jan 29 16:24:42.884587 kernel: SMBIOS 2.8 present.
Jan 29 16:24:42.884597 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 16:24:42.884606 kernel: Hypervisor detected: KVM
Jan 29 16:24:42.884615 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:24:42.884625 kernel: kvm-clock: using sched offset of 2258111496 cycles
Jan 29 16:24:42.884635 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:24:42.884645 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 16:24:42.884654 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:24:42.884675 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:24:42.884693 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 16:24:42.884707 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:24:42.884717 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:24:42.884726 kernel: Using GB pages for direct mapping
Jan 29 16:24:42.884736 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:24:42.884746 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 16:24:42.884758 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:42.884770 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:42.884874 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:42.884889 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 16:24:42.884899 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:42.884909 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:42.884919 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:42.884930 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:24:42.884940 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 16:24:42.884950 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 16:24:42.884965 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 16:24:42.884978 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 16:24:42.884989 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 16:24:42.885000 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 16:24:42.885011 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 16:24:42.885021 kernel: No NUMA configuration found
Jan 29 16:24:42.885032 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 16:24:42.885043 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 16:24:42.885056 kernel: Zone ranges:
Jan 29 16:24:42.885067 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:24:42.885077 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 16:24:42.885088 kernel: Normal empty
Jan 29 16:24:42.885099 kernel: Movable zone start for each node
Jan 29 16:24:42.885110 kernel: Early memory node ranges
Jan 29 16:24:42.885120 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:24:42.885131 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 16:24:42.885141 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 16:24:42.885155 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:24:42.885165 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:24:42.885175 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 16:24:42.885186 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:24:42.885197 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:24:42.885207 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:24:42.885218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:24:42.885228 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:24:42.885239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:24:42.885253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:24:42.885263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:24:42.885274 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:24:42.885294 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:24:42.885304 kernel: TSC deadline timer available
Jan 29 16:24:42.885315 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 16:24:42.885325 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:24:42.885335 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 16:24:42.885345 kernel: kvm-guest: setup PV sched yield
Jan 29 16:24:42.885356 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:24:42.885370 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:24:42.885381 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:24:42.885391 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 16:24:42.885402 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 16:24:42.885413 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 16:24:42.885423 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 16:24:42.885432 kernel: kvm-guest: PV spinlocks enabled
Jan 29 16:24:42.885442 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 16:24:42.885454 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:24:42.885468 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:24:42.885477 kernel: random: crng init done
Jan 29 16:24:42.885487 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:24:42.885497 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:24:42.885507 kernel: Fallback order for Node 0: 0
Jan 29 16:24:42.885517 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 29 16:24:42.885527 kernel: Policy zone: DMA32
Jan 29 16:24:42.885537 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:24:42.885550 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 138948K reserved, 0K cma-reserved)
Jan 29 16:24:42.885561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 16:24:42.885570 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:24:42.885580 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:24:42.885590 kernel: Dynamic Preempt: voluntary
Jan 29 16:24:42.885600 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:24:42.885611 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:24:42.885621 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 16:24:42.885632 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:24:42.885646 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:24:42.885656 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:24:42.885666 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:24:42.885676 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 16:24:42.885686 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 16:24:42.885696 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:24:42.885706 kernel: Console: colour VGA+ 80x25
Jan 29 16:24:42.885715 kernel: printk: console [ttyS0] enabled
Jan 29 16:24:42.885725 kernel: ACPI: Core revision 20230628
Jan 29 16:24:42.885738 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:24:42.885748 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:24:42.885758 kernel: x2apic enabled
Jan 29 16:24:42.885768 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:24:42.885790 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 16:24:42.885800 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 16:24:42.885832 kernel: kvm-guest: setup PV IPIs
Jan 29 16:24:42.885870 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:24:42.885880 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:24:42.885890 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 16:24:42.885900 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:24:42.885910 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 16:24:42.885927 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 16:24:42.885937 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:24:42.885947 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:24:42.885958 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:24:42.885970 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:24:42.885981 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 16:24:42.885991 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 16:24:42.886001 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:24:42.886011 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:24:42.886021 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 16:24:42.886029 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 16:24:42.886038 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 16:24:42.886046 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:24:42.886056 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:24:42.886064 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:24:42.886071 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:24:42.886079 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:24:42.886087 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:24:42.886095 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:24:42.886102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:24:42.886110 kernel: landlock: Up and running.
Jan 29 16:24:42.886118 kernel: SELinux: Initializing.
Jan 29 16:24:42.886128 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:24:42.886135 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:24:42.886143 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 16:24:42.886151 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:24:42.886159 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:24:42.886167 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:24:42.886174 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 16:24:42.886182 kernel: ... version: 0
Jan 29 16:24:42.886192 kernel: ... bit width: 48
Jan 29 16:24:42.886200 kernel: ... generic registers: 6
Jan 29 16:24:42.886207 kernel: ... value mask: 0000ffffffffffff
Jan 29 16:24:42.886215 kernel: ... max period: 00007fffffffffff
Jan 29 16:24:42.886223 kernel: ... fixed-purpose events: 0
Jan 29 16:24:42.886230 kernel: ... event mask: 000000000000003f
Jan 29 16:24:42.886238 kernel: signal: max sigframe size: 1776
Jan 29 16:24:42.886245 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:24:42.886253 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:24:42.886261 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:24:42.886271 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:24:42.886287 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 16:24:42.886295 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 16:24:42.886302 kernel: smpboot: Max logical packages: 1
Jan 29 16:24:42.886310 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 16:24:42.886317 kernel: devtmpfs: initialized
Jan 29 16:24:42.886325 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:24:42.886333 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:24:42.886341 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 16:24:42.886351 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:24:42.886358 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:24:42.886366 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:24:42.886374 kernel: audit: type=2000 audit(1738167882.576:1): state=initialized audit_enabled=0 res=1
Jan 29 16:24:42.886381 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:24:42.886389 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:24:42.886397 kernel: cpuidle: using governor menu
Jan 29 16:24:42.886404 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:24:42.886412 kernel: dca service started, version 1.12.1
Jan 29 16:24:42.886422 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:24:42.886430 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 16:24:42.886438 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:24:42.886445 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:24:42.886453 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:24:42.886461 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:24:42.886469 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:24:42.886476 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:24:42.886484 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:24:42.886494 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:24:42.886502 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:24:42.886509 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:24:42.886517 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:24:42.886525 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:24:42.886532 kernel: ACPI: Interpreter enabled
Jan 29 16:24:42.886540 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 16:24:42.886547 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:24:42.886555 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:24:42.886565 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:24:42.886573 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:24:42.886580 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:24:42.886834 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:24:42.887008 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 16:24:42.887140 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 16:24:42.887150 kernel: PCI host bridge to bus 0000:00
Jan 29 16:24:42.887292 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:24:42.887407 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:24:42.887519 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:24:42.887631 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 16:24:42.887742 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:24:42.887883 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 16:24:42.887999 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:24:42.888143 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:24:42.888285 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 16:24:42.888410 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 16:24:42.888533 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 16:24:42.888656 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 16:24:42.888804 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:24:42.888971 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 16:24:42.889098 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 16:24:42.889222 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 16:24:42.889357 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 16:24:42.889489 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:24:42.889615 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 16:24:42.889740 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 16:24:42.889931 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 16:24:42.890068 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:24:42.890194 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 29 16:24:42.890328 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 16:24:42.890454 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 16:24:42.890576 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 16:24:42.890707 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:24:42.890894 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:24:42.891028 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:24:42.891149 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 29 16:24:42.891269 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 16:24:42.891410 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:24:42.891532 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 16:24:42.891543 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:24:42.891555 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:24:42.891563 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:24:42.891571 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:24:42.891578 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:24:42.891586 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:24:42.891594 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:24:42.891601 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:24:42.891609 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:24:42.891617 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:24:42.891627 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:24:42.891634 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:24:42.891642 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:24:42.891650 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:24:42.891657 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:24:42.891665 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:24:42.891673 kernel: iommu: Default domain type: Translated
Jan 29 16:24:42.891680 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:24:42.891688 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:24:42.891698 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:24:42.891705 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:24:42.891713 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 16:24:42.891861 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:24:42.891986 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:24:42.892109 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:24:42.892120 kernel: vgaarb: loaded
Jan 29 16:24:42.892128 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:24:42.892139 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:24:42.892147 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:24:42.892155 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:24:42.892163 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:24:42.892170 kernel: pnp: PnP ACPI init
Jan 29 16:24:42.892315 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:24:42.892326 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 16:24:42.892334 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:24:42.892345 kernel: NET: Registered PF_INET protocol family
Jan 29 16:24:42.892353 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:24:42.892361 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:24:42.892369 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:24:42.892377 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:24:42.892384 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:24:42.892392 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:24:42.892400 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:24:42.892407 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:24:42.892417 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:24:42.892425 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:24:42.892539 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:24:42.892652 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:24:42.892764 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:24:42.892963 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 16:24:42.893077 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:24:42.893187 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 16:24:42.893201 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:24:42.893209 kernel: Initialise system trusted keyrings
Jan 29 16:24:42.893217 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:24:42.893225 kernel: Key type asymmetric registered
Jan 29 16:24:42.893233 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:24:42.893240 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:24:42.893248 kernel: io scheduler mq-deadline registered
Jan 29 16:24:42.893256 kernel: io scheduler kyber registered
Jan 29 16:24:42.893263 kernel: io scheduler bfq registered
Jan 29 16:24:42.893273 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:24:42.893291 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:24:42.893299 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 16:24:42.893307 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 16:24:42.893315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:24:42.893322 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:24:42.893330 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:24:42.893338 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:24:42.893345 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:24:42.893356 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:24:42.893487 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 16:24:42.893602 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 16:24:42.893716 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T16:24:42 UTC (1738167882)
Jan 29 16:24:42.893856 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 16:24:42.893868 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 16:24:42.893876 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:24:42.893884 kernel: Segment Routing with IPv6
Jan 29 16:24:42.893896 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:24:42.893903 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:24:42.893911 kernel: Key type dns_resolver registered
Jan 29 16:24:42.893919 kernel: IPI shorthand broadcast: enabled
Jan 29 16:24:42.893926 kernel: sched_clock: Marking stable (541002813, 112739199)->(705731368, -51989356)
Jan 29 16:24:42.893934 kernel: registered taskstats version 1
Jan 29 16:24:42.893942 kernel: Loading compiled-in X.509 certificates
Jan 29 16:24:42.893950 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:24:42.893957 kernel: Key type .fscrypt registered
Jan 29 16:24:42.893967 kernel: Key type fscrypt-provisioning registered
Jan 29 16:24:42.893975 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:24:42.893983 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:24:42.893991 kernel: ima: No architecture policies found
Jan 29 16:24:42.893998 kernel: clk: Disabling unused clocks
Jan 29 16:24:42.894006 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:24:42.894014 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:24:42.894022 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:24:42.894029 kernel: Run /init as init process
Jan 29 16:24:42.894039 kernel: with arguments:
Jan 29 16:24:42.894047 kernel: /init
Jan 29 16:24:42.894054 kernel: with environment:
Jan 29 16:24:42.894062 kernel: HOME=/
Jan 29 16:24:42.894069 kernel: TERM=linux
Jan 29 16:24:42.894077 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:24:42.894086 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:24:42.894097 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:24:42.894108 systemd[1]: Detected virtualization kvm.
Jan 29 16:24:42.894116 systemd[1]: Detected architecture x86-64.
Jan 29 16:24:42.894124 systemd[1]: Running in initrd.
Jan 29 16:24:42.894132 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:24:42.894141 systemd[1]: Hostname set to .
Jan 29 16:24:42.894149 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:24:42.894157 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:24:42.894165 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:24:42.894176 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:24:42.894197 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:24:42.894207 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:24:42.894216 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:24:42.894225 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:24:42.894237 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:24:42.894246 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:24:42.894255 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:24:42.894263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:24:42.894272 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:24:42.894290 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:24:42.894299 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:24:42.894307 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:24:42.894318 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:24:42.894327 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:24:42.894336 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:24:42.894344 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:24:42.894353 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:24:42.894361 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:24:42.894370 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:24:42.894378 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:24:42.894389 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:24:42.894400 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:24:42.894408 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:24:42.894417 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:24:42.894425 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:24:42.894434 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:24:42.894442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:24:42.894451 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:24:42.894459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:24:42.894471 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:24:42.894500 systemd-journald[194]: Collecting audit messages is disabled.
Jan 29 16:24:42.894522 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:24:42.894531 systemd-journald[194]: Journal started
Jan 29 16:24:42.894553 systemd-journald[194]: Runtime Journal (/run/log/journal/6ca94e338bcc4309a92aae2b2bed0f3f) is 6M, max 48.4M, 42.3M free.
Jan 29 16:24:42.884714 systemd-modules-load[195]: Inserted module 'overlay'
Jan 29 16:24:42.916124 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:24:42.916139 kernel: Bridge firewalling registered
Jan 29 16:24:42.910817 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 29 16:24:42.917802 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:24:42.918002 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:24:42.918546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:24:42.929994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:24:42.933620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:24:42.937030 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:24:42.941010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:24:42.942435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:24:42.949301 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:24:42.953341 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:24:42.953613 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:24:42.955735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:24:42.969927 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:24:42.972470 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:24:42.979617 dracut-cmdline[230]: dracut-dracut-053
Jan 29 16:24:42.982250 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:24:43.021619 systemd-resolved[232]: Positive Trust Anchors:
Jan 29 16:24:43.021634 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:24:43.021673 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:24:43.027017 systemd-resolved[232]: Defaulting to hostname 'linux'.
Jan 29 16:24:43.028033 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:24:43.031496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:24:43.056805 kernel: SCSI subsystem initialized
Jan 29 16:24:43.065793 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:24:43.076808 kernel: iscsi: registered transport (tcp)
Jan 29 16:24:43.096802 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:24:43.096828 kernel: QLogic iSCSI HBA Driver
Jan 29 16:24:43.139544 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:24:43.153934 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:24:43.177615 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:24:43.177662 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:24:43.177675 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:24:43.218815 kernel: raid6: avx2x4 gen() 30158 MB/s
Jan 29 16:24:43.235808 kernel: raid6: avx2x2 gen() 30767 MB/s
Jan 29 16:24:43.252876 kernel: raid6: avx2x1 gen() 26025 MB/s
Jan 29 16:24:43.252899 kernel: raid6: using algorithm avx2x2 gen() 30767 MB/s
Jan 29 16:24:43.270919 kernel: raid6: .... xor() 19924 MB/s, rmw enabled
Jan 29 16:24:43.270955 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:24:43.290814 kernel: xor: automatically using best checksumming function avx
Jan 29 16:24:43.443821 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:24:43.456473 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:24:43.469892 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:24:43.484916 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Jan 29 16:24:43.490234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:24:43.498993 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:24:43.516763 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 29 16:24:43.551993 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:24:43.562957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:24:43.624424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:24:43.632960 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:24:43.643327 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:24:43.646233 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:24:43.647535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:24:43.650953 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:24:43.661807 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 16:24:43.682680 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 16:24:43.682853 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:24:43.682865 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:24:43.682884 kernel: GPT:9289727 != 19775487
Jan 29 16:24:43.682895 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:24:43.682905 kernel: GPT:9289727 != 19775487
Jan 29 16:24:43.682915 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:24:43.682925 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:24:43.658924 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:24:43.671466 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:24:43.692807 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:24:43.695816 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:24:43.695848 kernel: libata version 3.00 loaded.
Jan 29 16:24:43.700850 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:24:43.701013 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:24:43.709623 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 16:24:43.740052 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 16:24:43.740071 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 16:24:43.740246 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 16:24:43.740416 kernel: scsi host0: ahci
Jan 29 16:24:43.740575 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (480)
Jan 29 16:24:43.740588 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463)
Jan 29 16:24:43.740599 kernel: scsi host1: ahci
Jan 29 16:24:43.740752 kernel: scsi host2: ahci
Jan 29 16:24:43.740934 kernel: scsi host3: ahci
Jan 29 16:24:43.741082 kernel: scsi host4: ahci
Jan 29 16:24:43.741237 kernel: scsi host5: ahci
Jan 29 16:24:43.741394 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 29 16:24:43.741406 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 29 16:24:43.741416 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 29 16:24:43.741427 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 29 16:24:43.741437 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 29 16:24:43.741448 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 29 16:24:43.702550 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:24:43.703923 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:24:43.704082 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:24:43.706805 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:24:43.720085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:24:43.751739 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 16:24:43.782753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:24:43.807522 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 16:24:43.815708 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 16:24:43.816985 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 16:24:43.828033 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:24:43.841908 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:24:43.843813 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:24:43.860845 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:24:44.046809 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 16:24:44.046847 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 16:24:44.047808 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 16:24:44.047836 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 16:24:44.049262 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 16:24:44.049279 kernel: ata3.00: applying bridge limits
Jan 29 16:24:44.050812 kernel: ata3.00: configured for UDMA/100
Jan 29 16:24:44.050840 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 16:24:44.064809 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 16:24:44.064867 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 16:24:44.111426 disk-uuid[563]: Primary Header is updated.
Jan 29 16:24:44.111426 disk-uuid[563]: Secondary Entries is updated.
Jan 29 16:24:44.111426 disk-uuid[563]: Secondary Header is updated.
Jan 29 16:24:44.129581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:24:44.129617 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 16:24:44.140066 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:24:44.140083 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 16:24:45.144017 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:24:45.150094 disk-uuid[582]: The operation has completed successfully.
Jan 29 16:24:45.194164 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:24:45.194330 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:24:45.251943 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:24:45.255851 sh[597]: Success
Jan 29 16:24:45.268825 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 16:24:45.304952 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:24:45.319662 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:24:45.323773 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:24:45.333378 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:24:45.333416 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:24:45.333427 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:24:45.334395 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:24:45.335802 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:24:45.339883 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:24:45.340661 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:24:45.346966 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:24:45.349670 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:24:45.362303 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:45.362328 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:24:45.362339 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:24:45.365808 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:24:45.375309 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:24:45.377196 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:45.386868 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:24:45.391929 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:24:45.445158 ignition[696]: Ignition 2.20.0
Jan 29 16:24:45.445171 ignition[696]: Stage: fetch-offline
Jan 29 16:24:45.445238 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:45.445251 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:24:45.445357 ignition[696]: parsed url from cmdline: ""
Jan 29 16:24:45.445362 ignition[696]: no config URL provided
Jan 29 16:24:45.445369 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:24:45.445381 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:24:45.445408 ignition[696]: op(1): [started] loading QEMU firmware config module
Jan 29 16:24:45.445415 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 16:24:45.451357 ignition[696]: op(1): [finished] loading QEMU firmware config module
Jan 29 16:24:45.451376 ignition[696]: QEMU firmware config was not found. Ignoring...
Jan 29 16:24:45.480628 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:24:45.491906 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:24:45.500880 ignition[696]: parsing config with SHA512: d390efde098bd3ca725bd610369393cbad5e34955ef3d75501dab30c4dcd0dd712fa1ef02015398410fad2e9254602b0dcbb8dd9c99e8561affd8ba8711d8c11
Jan 29 16:24:45.505813 unknown[696]: fetched base config from "system"
Jan 29 16:24:45.505830 unknown[696]: fetched user config from "qemu"
Jan 29 16:24:45.506452 ignition[696]: fetch-offline: fetch-offline passed
Jan 29 16:24:45.508745 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:24:45.506538 ignition[696]: Ignition finished successfully
Jan 29 16:24:45.518367 systemd-networkd[786]: lo: Link UP
Jan 29 16:24:45.518377 systemd-networkd[786]: lo: Gained carrier
Jan 29 16:24:45.520012 systemd-networkd[786]: Enumeration completed
Jan 29 16:24:45.520123 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:24:45.520350 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:24:45.520355 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:24:45.521063 systemd-networkd[786]: eth0: Link UP
Jan 29 16:24:45.521066 systemd-networkd[786]: eth0: Gained carrier
Jan 29 16:24:45.521073 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:24:45.522255 systemd[1]: Reached target network.target - Network.
Jan 29 16:24:45.524254 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 16:24:45.530912 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:24:45.536812 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:24:45.548635 ignition[790]: Ignition 2.20.0
Jan 29 16:24:45.548646 ignition[790]: Stage: kargs
Jan 29 16:24:45.548833 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:45.548845 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:24:45.549621 ignition[790]: kargs: kargs passed
Jan 29 16:24:45.549664 ignition[790]: Ignition finished successfully
Jan 29 16:24:45.552907 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:24:45.566935 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:24:45.577530 ignition[800]: Ignition 2.20.0
Jan 29 16:24:45.577541 ignition[800]: Stage: disks
Jan 29 16:24:45.577681 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:45.577692 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:24:45.580467 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:24:45.578465 ignition[800]: disks: disks passed
Jan 29 16:24:45.582177 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:24:45.578509 ignition[800]: Ignition finished successfully
Jan 29 16:24:45.584055 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:24:45.585916 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:24:45.585975 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:24:45.586318 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:24:45.596890 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:24:45.609300 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 16:24:45.616767 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:24:46.342866 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:24:46.424826 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:24:46.425465 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:24:46.427114 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:24:46.436882 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:24:46.438662 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:24:46.439889 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:24:46.445283 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (819)
Jan 29 16:24:46.445303 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:46.439942 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:24:46.452011 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:24:46.452032 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:24:46.452044 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:24:46.439970 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:24:46.445690 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:24:46.453097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:24:46.461946 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:24:46.493031 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:24:46.496742 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:24:46.501026 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:24:46.505367 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:24:46.580227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:24:46.593887 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:24:46.595846 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:24:46.601829 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:46.618632 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:24:46.620916 ignition[932]: INFO : Ignition 2.20.0
Jan 29 16:24:46.620916 ignition[932]: INFO : Stage: mount
Jan 29 16:24:46.622701 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:24:46.622701 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:24:46.622701 ignition[932]: INFO : mount: mount passed
Jan 29 16:24:46.622701 ignition[932]: INFO : Ignition finished successfully
Jan 29 16:24:46.623596 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:24:46.628900 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:24:46.647910 systemd-networkd[786]: eth0: Gained IPv6LL
Jan 29 16:24:47.332755 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:24:47.342112 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:24:47.350551 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945)
Jan 29 16:24:47.350593 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:24:47.350605 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:24:47.351414 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:24:47.354801 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:24:47.355988 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:24:47.381141 ignition[962]: INFO : Ignition 2.20.0 Jan 29 16:24:47.381141 ignition[962]: INFO : Stage: files Jan 29 16:24:47.383083 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:24:47.383083 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:24:47.383083 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:24:47.386805 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:24:47.386805 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:24:47.389930 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:24:47.389930 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:24:47.389930 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:24:47.389930 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 16:24:47.389930 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 16:24:47.387558 unknown[962]: wrote ssh authorized keys file for user: core Jan 29 16:24:47.451123 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:24:47.573906 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 16:24:47.573906 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:24:47.578582 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 16:24:47.947065 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:24:48.012945 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:24:48.012945 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:24:48.016816 ignition[962]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:24:48.016816 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 16:24:48.458711 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:24:48.803668 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 16:24:48.803668 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:24:48.808343 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:24:48.811138 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:24:48.811138 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 16:24:48.811138 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 16:24:48.816454 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:24:48.818528 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:24:48.818528 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 16:24:48.821684 ignition[962]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 16:24:48.836902 ignition[962]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:24:48.842263 ignition[962]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:24:48.843900 ignition[962]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 16:24:48.843900 ignition[962]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:24:48.843900 ignition[962]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:24:48.843900 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:24:48.843900 ignition[962]: INFO : files: createResultFile: createFiles: op(13): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:24:48.843900 ignition[962]: INFO : files: files passed Jan 29 16:24:48.843900 ignition[962]: INFO : Ignition finished successfully Jan 29 16:24:48.854905 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:24:48.867940 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:24:48.870062 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:24:48.871730 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:24:48.871847 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:24:48.880505 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 16:24:48.883304 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:24:48.884966 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:24:48.887693 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:24:48.886305 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:24:48.887994 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:24:48.896909 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:24:48.922024 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:24:48.923087 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:24:48.925943 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:24:48.928006 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:24:48.930104 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:24:48.943980 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:24:48.956239 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:24:48.959143 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:24:48.973436 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:24:48.973668 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:24:48.974220 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:24:48.974542 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:24:48.974697 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:24:48.975529 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:24:48.975884 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:24:48.976203 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:24:48.976530 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
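The files stage logged above wrote units and preset changes into /sysroot; after the switch into the real root, the outcome can be spot-checked with ordinary systemd tooling (a sketch, reusing the unit and file names from the log):

    systemctl cat prepare-helm.service            # unit written by op(c)/op(d)
    systemctl is-enabled prepare-helm.service     # 'enabled', per op(12)'s preset
    systemctl is-enabled coreos-metadata.service  # 'disabled', per op(10)
    cat /etc/.ignition-result.json                # result file written by op(13)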
Jan 29 16:24:49.015635 ignition[1016]: INFO : Ignition 2.20.0 Jan 29 16:24:49.015635 ignition[1016]: INFO : Stage: umount Jan 29 16:24:49.015635 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:24:49.015635 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:24:49.015635 ignition[1016]: INFO : umount: umount passed Jan 29 16:24:49.015635 ignition[1016]: INFO : Ignition finished successfully Jan 29 16:24:48.976877 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:24:48.977204 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:24:48.977531 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:24:48.977891 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:24:48.978212 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:24:48.978534 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:24:48.978850 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:24:48.978982 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:24:48.979519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:24:48.979888 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:24:48.980161 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:24:48.980332 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:24:48.980677 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:24:48.980821 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:24:48.981326 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:24:48.981450 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:24:48.981937 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:24:48.982328 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:24:48.992848 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:24:48.993237 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:24:48.993536 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:24:48.993892 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:24:48.994006 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:24:48.994398 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:24:48.994499 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:24:48.994925 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:24:48.995061 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:24:48.995422 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:24:48.995546 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:24:48.996998 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:24:48.998010 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:24:48.998331 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:24:48.998474 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 29 16:24:48.999272 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:24:48.999408 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:24:49.004517 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:24:49.004674 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:24:49.017637 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:24:49.017798 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:24:49.019576 systemd[1]: Stopped target network.target - Network. Jan 29 16:24:49.021424 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:24:49.021492 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:24:49.023242 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:24:49.023304 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:24:49.025135 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:24:49.025192 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:24:49.027193 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:24:49.027250 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:24:49.029748 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:24:49.032192 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:24:49.035313 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:24:49.040384 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:24:49.040515 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:24:49.043957 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:24:49.044201 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:24:49.044318 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:24:49.047989 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:24:49.048712 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:24:49.048813 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:24:49.056913 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:24:49.058531 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:24:49.058600 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:24:49.060905 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:24:49.060955 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:24:49.063169 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:24:49.063219 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:24:49.065291 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:24:49.065339 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:24:49.067755 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:24:49.070969 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jan 29 16:24:49.071036 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:24:49.079604 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:24:49.079753 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:24:49.081766 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:24:49.081953 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:24:49.084551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:24:49.084601 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:24:49.086533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:24:49.086572 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:24:49.088449 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:24:49.088497 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:24:49.090479 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:24:49.090526 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:24:49.092402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:24:49.092452 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:24:49.101985 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:24:49.103605 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:24:49.103676 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:24:49.107124 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 16:24:49.107185 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:24:49.109390 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:24:49.109448 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:24:49.111629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:24:49.111680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:24:49.115098 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:24:49.115174 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:24:49.115636 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:24:49.115756 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:24:49.237346 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:24:49.237480 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:24:49.239749 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:24:49.241497 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:24:49.241555 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:24:49.257986 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:24:49.267730 systemd[1]: Switching root. Jan 29 16:24:49.301262 systemd-journald[194]: Journal stopped Jan 29 16:24:50.852409 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
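The long run of Stopped/Closed lines is systemd's standard initrd teardown: every service started in the initramfs is stopped, the udev and networkd sockets are closed, and PID 1 switches into the real root, which is why the initrd journald receives SIGTERM from PID 1. The same handoff can be reviewed after boot, for example:

    journalctl -b -o short-precise | grep -E 'Switching root|Journal stopped'
    systemctl list-dependencies --after initrd-switch-root.target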
Jan 29 16:24:50.852495 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:24:50.852521 kernel: SELinux: policy capability open_perms=1 Jan 29 16:24:50.852544 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:24:50.852560 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:24:50.852576 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:24:50.852595 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:24:50.852610 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:24:50.852627 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:24:50.852649 kernel: audit: type=1403 audit(1738167889.912:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:24:50.852675 systemd[1]: Successfully loaded SELinux policy in 46.996ms. Jan 29 16:24:50.852701 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.426ms. Jan 29 16:24:50.852721 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:24:50.852738 systemd[1]: Detected virtualization kvm. Jan 29 16:24:50.852755 systemd[1]: Detected architecture x86-64. Jan 29 16:24:50.852771 systemd[1]: Detected first boot. Jan 29 16:24:50.853662 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:24:50.853683 zram_generator::config[1063]: No configuration found. Jan 29 16:24:50.853706 kernel: Guest personality initialized and is inactive Jan 29 16:24:50.853721 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:24:50.853737 kernel: Initialized host personality Jan 29 16:24:50.853752 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:24:50.853769 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:24:50.853805 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:24:50.853823 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:24:50.853840 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:24:50.853857 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:24:50.853879 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:24:50.853896 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:24:50.853915 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:24:50.853937 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:24:50.853955 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:24:50.853973 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:24:50.853990 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:24:50.854007 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:24:50.854024 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:24:50.854046 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 16:24:50.854074 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:24:50.854092 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:24:50.854110 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:24:50.854127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:24:50.854143 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:24:50.854160 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:24:50.854182 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:24:50.854199 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:24:50.854216 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:24:50.854233 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:24:50.854250 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:24:50.854267 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:24:50.854284 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:24:50.854304 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:24:50.854321 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:24:50.854341 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:24:50.854359 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:24:50.854375 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:24:50.854392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:24:50.854409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:24:50.854426 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:24:50.854443 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:24:50.854460 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:24:50.854477 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:24:50.854494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:50.854515 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:24:50.854532 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:24:50.854550 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:24:50.854567 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:24:50.854584 systemd[1]: Reached target machines.target - Containers. Jan 29 16:24:50.854601 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:24:50.854618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:24:50.854635 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 29 16:24:50.854656 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:24:50.854676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:24:50.854693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:24:50.854710 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:24:50.854727 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:24:50.854744 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:24:50.854760 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:24:50.854831 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:24:50.854856 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:24:50.854874 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:24:50.854891 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:24:50.854909 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:24:50.854926 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:24:50.854943 kernel: fuse: init (API version 7.39) Jan 29 16:24:50.854959 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:24:50.855520 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:24:50.855543 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:24:50.855566 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:24:50.855583 kernel: loop: module loaded Jan 29 16:24:50.855600 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:24:50.855958 systemd-journald[1134]: Collecting audit messages is disabled. Jan 29 16:24:50.855989 kernel: ACPI: bus type drm_connector registered Jan 29 16:24:50.856007 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:24:50.856024 systemd[1]: Stopped verity-setup.service. Jan 29 16:24:50.856046 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:50.856076 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:24:50.856094 systemd-journald[1134]: Journal started Jan 29 16:24:50.856127 systemd-journald[1134]: Runtime Journal (/run/log/journal/6ca94e338bcc4309a92aae2b2bed0f3f) is 6M, max 48.4M, 42.3M free. Jan 29 16:24:50.597377 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:24:50.611681 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 16:24:50.612295 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:24:50.860307 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:24:50.861276 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:24:50.862689 systemd[1]: Mounted media.mount - External Media Directory. 
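The modprobe@*.service entries are instances of a single template unit, one per kernel module, which is why the fuse and loop initialization messages from the kernel appear interleaved with them. Roughly (the exact ExecStart line may differ per distribution):

    systemctl cat modprobe@.service        # template; ExecStart runs modprobe on the instance name %I
    systemctl start modprobe@loop.service  # instantiating the template loads the 'loop' module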
Jan 29 16:24:50.863920 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:24:50.865468 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:24:50.866971 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:24:50.868438 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:24:50.870267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:24:50.871895 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:24:50.872140 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:24:50.873860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:24:50.874079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:24:50.875679 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:24:50.875934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:24:50.877385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:24:50.877609 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:24:50.879187 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:24:50.879398 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:24:50.880833 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:24:50.881038 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:24:50.882512 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:24:50.884103 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:24:50.885713 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:24:50.887576 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:24:50.901357 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:24:50.916905 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:24:50.919256 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:24:50.920505 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:24:50.920537 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:24:50.922576 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:24:50.924971 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:24:50.927975 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:24:50.929172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:24:50.932048 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:24:50.935350 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:24:50.937444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:24:50.938843 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 29 16:24:50.941953 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:24:50.945741 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:24:50.951925 systemd-journald[1134]: Time spent on flushing to /var/log/journal/6ca94e338bcc4309a92aae2b2bed0f3f is 17.037ms for 969 entries. Jan 29 16:24:50.951925 systemd-journald[1134]: System Journal (/var/log/journal/6ca94e338bcc4309a92aae2b2bed0f3f) is 8M, max 195.6M, 187.6M free. Jan 29 16:24:50.981462 systemd-journald[1134]: Received client request to flush runtime journal. Jan 29 16:24:50.953348 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:24:50.956881 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:24:50.961063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:24:50.963593 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:24:50.965004 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:24:50.966634 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:24:50.969114 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:24:50.975107 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:24:50.983798 kernel: loop0: detected capacity change from 0 to 218376 Jan 29 16:24:50.984967 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:24:50.990866 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:24:50.993333 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:24:50.995467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:24:51.005358 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 16:24:51.009703 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jan 29 16:24:51.011805 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:24:51.010147 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jan 29 16:24:51.010971 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:24:51.011873 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:24:51.019240 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:24:51.027108 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:24:51.042807 kernel: loop1: detected capacity change from 0 to 138176 Jan 29 16:24:51.056746 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:24:51.067017 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:24:51.081835 kernel: loop2: detected capacity change from 0 to 147912 Jan 29 16:24:51.087893 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 29 16:24:51.087919 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 29 16:24:51.095905 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 16:24:51.118823 kernel: loop3: detected capacity change from 0 to 218376 Jan 29 16:24:51.131832 kernel: loop4: detected capacity change from 0 to 138176 Jan 29 16:24:51.145834 kernel: loop5: detected capacity change from 0 to 147912 Jan 29 16:24:51.157884 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 16:24:51.158515 (sd-merge)[1212]: Merged extensions into '/usr'. Jan 29 16:24:51.164373 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:24:51.164392 systemd[1]: Reloading... Jan 29 16:24:51.241307 zram_generator::config[1243]: No configuration found. Jan 29 16:24:51.280683 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:24:51.375299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:24:51.447013 systemd[1]: Reloading finished in 281 ms. Jan 29 16:24:51.472225 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:24:51.474267 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:24:51.491404 systemd[1]: Starting ensure-sysext.service... Jan 29 16:24:51.493957 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:24:51.507832 systemd[1]: Reload requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:24:51.507848 systemd[1]: Reloading... Jan 29 16:24:51.517577 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:24:51.517890 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:24:51.518859 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:24:51.519315 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Jan 29 16:24:51.520055 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Jan 29 16:24:51.524687 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:24:51.524806 systemd-tmpfiles[1278]: Skipping /boot Jan 29 16:24:51.538547 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:24:51.538731 systemd-tmpfiles[1278]: Skipping /boot Jan 29 16:24:51.576376 zram_generator::config[1310]: No configuration found. Jan 29 16:24:51.707400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:24:51.776810 systemd[1]: Reloading finished in 268 ms. Jan 29 16:24:51.793820 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:24:51.813989 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:24:51.824238 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:24:51.827368 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:24:51.830593 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
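The (sd-merge) lines show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr, followed by a reload so the units they ship become visible. A sketch of inspecting that state, using the path Ignition wrote earlier:

    systemd-sysext status                 # which extensions are merged, and onto which hierarchy
    ls -l /etc/extensions/kubernetes.raw  # symlink written during the Ignition files stage
    systemd-sysext refresh                # unmerge and remerge after changing images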
Jan 29 16:24:51.835546 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:24:51.840309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:24:51.844231 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:24:51.853167 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:51.853409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:24:51.861538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:24:51.865198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:24:51.869517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:24:51.871063 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:24:51.871210 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:24:51.873849 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:24:51.875438 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:51.877796 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:24:51.878300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:24:51.880543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:24:51.881050 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:24:51.886412 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:24:51.888862 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:24:51.889218 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:24:51.890457 systemd-udevd[1350]: Using default interface naming scheme 'v255'. Jan 29 16:24:51.899157 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:24:51.905866 augenrules[1379]: No rules Jan 29 16:24:51.907845 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:24:51.908182 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:24:51.912964 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:51.913521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:24:51.921050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:24:51.925994 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:24:51.930866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:24:51.935093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:24:51.936842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 16:24:51.936889 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:24:51.938886 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:24:51.940467 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:24:51.941343 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:24:51.943636 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:24:51.946045 systemd[1]: Finished ensure-sysext.service. Jan 29 16:24:51.951875 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:24:51.955390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:24:51.955654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:24:51.957390 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:24:51.957650 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:24:51.963704 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:24:51.963997 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:24:51.967356 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:24:51.967626 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:24:51.995320 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:24:52.007187 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:24:52.008506 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:24:52.008598 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:24:52.012022 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:24:52.016370 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:24:52.016969 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:24:52.060841 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1395) Jan 29 16:24:52.079869 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 16:24:52.096727 systemd-resolved[1349]: Positive Trust Anchors: Jan 29 16:24:52.096747 systemd-resolved[1349]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:24:52.096814 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:24:52.098831 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:24:52.105804 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 16:24:52.120812 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 16:24:52.121090 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 16:24:52.107395 systemd-resolved[1349]: Defaulting to hostname 'linux'. Jan 29 16:24:52.128244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:24:52.131077 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:24:52.132536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:24:52.137428 systemd-networkd[1423]: lo: Link UP Jan 29 16:24:52.137447 systemd-networkd[1423]: lo: Gained carrier Jan 29 16:24:52.139573 systemd-networkd[1423]: Enumeration completed Jan 29 16:24:52.142105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:24:52.142429 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:24:52.142443 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:24:52.144231 systemd-networkd[1423]: eth0: Link UP Jan 29 16:24:52.144247 systemd-networkd[1423]: eth0: Gained carrier Jan 29 16:24:52.144263 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:24:52.144531 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:24:52.146062 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:24:52.147728 systemd[1]: Reached target network.target - Network. Jan 29 16:24:52.148762 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:24:52.158798 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:24:52.159018 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:24:52.160485 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Jan 29 16:24:52.823222 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:24:52.823380 systemd-timesyncd[1426]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 16:24:52.823452 systemd-timesyncd[1426]: Initial clock synchronization to Wed 2025-01-29 16:24:52.822995 UTC. Jan 29 16:24:52.823517 systemd-resolved[1349]: Clock change detected. Flushing caches. 
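Everything from 'Enumeration completed' to the clock jump is the usual networkd/resolved/timesyncd bring-up: eth0 matches the catch-all zz-default.network, re-acquires its DHCPv4 lease, and the first NTP response from 10.0.0.1 steps the clock, prompting resolved to flush its caches. Runtime equivalents, as a sketch:

    networkctl status eth0         # should show the 10.0.0.140/16 lease from 10.0.0.1
    resolvectl status              # trust anchors and per-link DNS configuration
    timedatectl timesync-status    # the NTP peer contacted above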
Jan 29 16:24:52.841554 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:24:52.858600 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 16:24:52.849115 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:24:52.900072 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:24:52.914090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:24:52.928086 kernel: kvm_amd: TSC scaling supported Jan 29 16:24:52.928208 kernel: kvm_amd: Nested Virtualization enabled Jan 29 16:24:52.928221 kernel: kvm_amd: Nested Paging enabled Jan 29 16:24:52.928244 kernel: kvm_amd: LBR virtualization supported Jan 29 16:24:52.929158 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 16:24:52.929187 kernel: kvm_amd: Virtual GIF supported Jan 29 16:24:52.949933 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:24:52.986246 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:24:53.012069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:24:53.026228 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:24:53.034414 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:24:53.065391 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:24:53.067080 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:24:53.068260 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:24:53.069464 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:24:53.070825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:24:53.072470 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:24:53.073703 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:24:53.074997 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:24:53.076324 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:24:53.076351 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:24:53.077311 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:24:53.079266 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:24:53.082056 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:24:53.085541 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:24:53.087003 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:24:53.088336 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:24:53.092245 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:24:53.093705 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:24:53.096415 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
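The Listening on ... lines are socket activation at work: sshd.socket (and docker.socket just below) are bound before their daemons run, so the first incoming connection starts the matching service on demand. For example:

    systemctl list-sockets              # LISTEN address, socket unit, and the unit it activates
    systemctl is-active docker.service  # plausibly 'inactive' until the socket is first used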
Jan 29 16:24:53.098551 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:24:53.100012 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:24:53.101300 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:24:53.102402 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:24:53.102434 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:24:53.103569 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:24:53.105988 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:24:53.109106 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:24:53.110571 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:24:53.114128 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:24:53.115531 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:24:53.118435 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:24:53.123118 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:24:53.124824 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:24:53.127525 jq[1458]: false Jan 29 16:24:53.128209 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:24:53.133926 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:24:53.136487 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:24:53.137241 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:24:53.140193 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:24:53.144983 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:24:53.148981 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:24:53.152990 extend-filesystems[1459]: Found loop3 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found loop4 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found loop5 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found sr0 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found vda Jan 29 16:24:53.152990 extend-filesystems[1459]: Found vda1 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found vda2 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found vda3 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found usr Jan 29 16:24:53.152990 extend-filesystems[1459]: Found vda4 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found vda6 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found vda7 Jan 29 16:24:53.152990 extend-filesystems[1459]: Found vda9 Jan 29 16:24:53.152990 extend-filesystems[1459]: Checking size of /dev/vda9 Jan 29 16:24:53.149840 dbus-daemon[1457]: [system] SELinux support is enabled Jan 29 16:24:53.151088 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 29 16:24:53.211974 extend-filesystems[1459]: Resized partition /dev/vda9 Jan 29 16:24:53.215949 jq[1469]: true Jan 29 16:24:53.158425 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:24:53.216316 extend-filesystems[1488]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:24:53.158942 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:24:53.166499 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:24:53.169157 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:24:53.218201 tar[1477]: linux-amd64/LICENSE Jan 29 16:24:53.218201 tar[1477]: linux-amd64/helm Jan 29 16:24:53.174244 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:24:53.218578 jq[1483]: true Jan 29 16:24:53.174562 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:24:53.192736 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:24:53.223890 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 16:24:53.230564 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:24:53.230618 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:24:53.234599 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:24:53.234626 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:24:53.235807 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1395) Jan 29 16:24:53.239263 update_engine[1466]: I20250129 16:24:53.239172 1466 main.cc:92] Flatcar Update Engine starting Jan 29 16:24:53.244247 systemd-logind[1465]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 16:24:53.246839 update_engine[1466]: I20250129 16:24:53.245137 1466 update_check_scheduler.cc:74] Next update check in 5m19s Jan 29 16:24:53.244280 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:24:53.250359 systemd-logind[1465]: New seat seat0. Jan 29 16:24:53.270450 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 16:24:53.273558 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:24:53.274944 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:24:53.282989 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:24:53.295341 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:24:53.307165 extend-filesystems[1488]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 16:24:53.307165 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 16:24:53.307165 extend-filesystems[1488]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 16:24:53.313878 extend-filesystems[1459]: Resized filesystem in /dev/vda9 Jan 29 16:24:53.315069 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 29 16:24:53.315365 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:24:53.326960 bash[1510]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:24:53.328938 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:24:53.331186 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 16:24:53.333822 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:24:53.430921 containerd[1484]: time="2025-01-29T16:24:53.430787453Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:24:53.454825 containerd[1484]: time="2025-01-29T16:24:53.454750833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:53.456635 containerd[1484]: time="2025-01-29T16:24:53.456595722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:53.456635 containerd[1484]: time="2025-01-29T16:24:53.456624797Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:24:53.456707 containerd[1484]: time="2025-01-29T16:24:53.456643211Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:24:53.456901 containerd[1484]: time="2025-01-29T16:24:53.456879294Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:24:53.456940 containerd[1484]: time="2025-01-29T16:24:53.456902337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457014 containerd[1484]: time="2025-01-29T16:24:53.456992426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457047 containerd[1484]: time="2025-01-29T16:24:53.457011522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457312 containerd[1484]: time="2025-01-29T16:24:53.457282250Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457312 containerd[1484]: time="2025-01-29T16:24:53.457302788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457374 containerd[1484]: time="2025-01-29T16:24:53.457319540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457374 containerd[1484]: time="2025-01-29T16:24:53.457332274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457477 containerd[1484]: time="2025-01-29T16:24:53.457456326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457754 containerd[1484]: time="2025-01-29T16:24:53.457725181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457958 containerd[1484]: time="2025-01-29T16:24:53.457929634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:24:53.457958 containerd[1484]: time="2025-01-29T16:24:53.457949992Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:24:53.458123 containerd[1484]: time="2025-01-29T16:24:53.458083392Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:24:53.458192 containerd[1484]: time="2025-01-29T16:24:53.458174012Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:24:53.465510 containerd[1484]: time="2025-01-29T16:24:53.465457852Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:24:53.465609 containerd[1484]: time="2025-01-29T16:24:53.465536560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:24:53.465609 containerd[1484]: time="2025-01-29T16:24:53.465555215Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:24:53.465609 containerd[1484]: time="2025-01-29T16:24:53.465571846Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:24:53.465609 containerd[1484]: time="2025-01-29T16:24:53.465588778Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:24:53.465890 containerd[1484]: time="2025-01-29T16:24:53.465788682Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:24:53.466248 containerd[1484]: time="2025-01-29T16:24:53.466056284Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:24:53.466248 containerd[1484]: time="2025-01-29T16:24:53.466218539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:24:53.466248 containerd[1484]: time="2025-01-29T16:24:53.466237354Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:24:53.466328 containerd[1484]: time="2025-01-29T16:24:53.466256319Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:24:53.466328 containerd[1484]: time="2025-01-29T16:24:53.466270095Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:24:53.466328 containerd[1484]: time="2025-01-29T16:24:53.466282448Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:24:53.466328 containerd[1484]: time="2025-01-29T16:24:53.466294040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 29 16:24:53.466328 containerd[1484]: time="2025-01-29T16:24:53.466306263Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:24:53.466328 containerd[1484]: time="2025-01-29T16:24:53.466319328Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:24:53.466328 containerd[1484]: time="2025-01-29T16:24:53.466330799Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466346198Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466358641Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466377236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466395380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466406852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466426969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466438381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466450764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466461364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466474509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466488 containerd[1484]: time="2025-01-29T16:24:53.466488485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466506228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466520836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466533179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466556643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466571250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466591157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466603951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466615463Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466653945Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466672780Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466684162Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:24:53.466690 containerd[1484]: time="2025-01-29T16:24:53.466696014Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:24:53.466943 containerd[1484]: time="2025-01-29T16:24:53.466705722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:24:53.466943 containerd[1484]: time="2025-01-29T16:24:53.466718937Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:24:53.466943 containerd[1484]: time="2025-01-29T16:24:53.466728715Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:24:53.466943 containerd[1484]: time="2025-01-29T16:24:53.466738985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:24:53.467255 containerd[1484]: time="2025-01-29T16:24:53.467022637Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:24:53.467255 containerd[1484]: time="2025-01-29T16:24:53.467069094Z" level=info msg="Connect containerd service" Jan 29 16:24:53.467255 containerd[1484]: time="2025-01-29T16:24:53.467120039Z" level=info msg="using legacy CRI server" Jan 29 16:24:53.467255 containerd[1484]: time="2025-01-29T16:24:53.467127904Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:24:53.467255 containerd[1484]: time="2025-01-29T16:24:53.467238081Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:24:53.467870 containerd[1484]: time="2025-01-29T16:24:53.467831053Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:24:53.468377 
containerd[1484]: time="2025-01-29T16:24:53.468116568Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:24:53.468377 containerd[1484]: time="2025-01-29T16:24:53.468134201Z" level=info msg="Start subscribing containerd event" Jan 29 16:24:53.468377 containerd[1484]: time="2025-01-29T16:24:53.468169608Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:24:53.468377 containerd[1484]: time="2025-01-29T16:24:53.468212969Z" level=info msg="Start recovering state" Jan 29 16:24:53.468377 containerd[1484]: time="2025-01-29T16:24:53.468287308Z" level=info msg="Start event monitor" Jan 29 16:24:53.468377 containerd[1484]: time="2025-01-29T16:24:53.468298489Z" level=info msg="Start snapshots syncer" Jan 29 16:24:53.468377 containerd[1484]: time="2025-01-29T16:24:53.468310722Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:24:53.468377 containerd[1484]: time="2025-01-29T16:24:53.468318527Z" level=info msg="Start streaming server" Jan 29 16:24:53.468964 containerd[1484]: time="2025-01-29T16:24:53.468740528Z" level=info msg="containerd successfully booted in 0.039598s" Jan 29 16:24:53.468837 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:24:53.551548 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:24:53.575983 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:24:53.586225 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:24:53.588919 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:49630.service - OpenSSH per-connection server daemon (10.0.0.1:49630). Jan 29 16:24:53.592011 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:24:53.592323 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:24:53.596126 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:24:53.613788 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:24:53.627154 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:24:53.630203 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:24:53.631652 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:24:53.660151 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 49630 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:24:53.661965 sshd-session[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:24:53.668455 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:24:53.676127 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:24:53.679498 tar[1477]: linux-amd64/README.md Jan 29 16:24:53.685175 systemd-logind[1465]: New session 1 of user core. Jan 29 16:24:53.692031 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:24:53.693972 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:24:53.706212 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:24:53.710753 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:24:53.713216 systemd-logind[1465]: New session c1 of user core. Jan 29 16:24:53.845144 systemd[1552]: Queued start job for default target default.target. Jan 29 16:24:53.855542 systemd[1552]: Created slice app.slice - User Application Slice. 
Jan 29 16:24:53.855573 systemd[1552]: Reached target paths.target - Paths. Jan 29 16:24:53.855622 systemd[1552]: Reached target timers.target - Timers. Jan 29 16:24:53.857408 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:24:53.870988 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:24:53.871151 systemd[1552]: Reached target sockets.target - Sockets. Jan 29 16:24:53.871205 systemd[1552]: Reached target basic.target - Basic System. Jan 29 16:24:53.871247 systemd[1552]: Reached target default.target - Main User Target. Jan 29 16:24:53.871286 systemd[1552]: Startup finished in 150ms. Jan 29 16:24:53.871616 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:24:53.887118 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:24:53.950048 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:49632.service - OpenSSH per-connection server daemon (10.0.0.1:49632). Jan 29 16:24:53.992090 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 49632 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:24:53.993680 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:24:53.997680 systemd-logind[1465]: New session 2 of user core. Jan 29 16:24:54.006987 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:24:54.061471 sshd[1565]: Connection closed by 10.0.0.1 port 49632 Jan 29 16:24:54.061887 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Jan 29 16:24:54.077540 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:49632.service: Deactivated successfully. Jan 29 16:24:54.079678 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:24:54.081224 systemd-logind[1465]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:24:54.089246 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:49642.service - OpenSSH per-connection server daemon (10.0.0.1:49642). Jan 29 16:24:54.091754 systemd-logind[1465]: Removed session 2. Jan 29 16:24:54.124307 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 49642 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:24:54.126010 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:24:54.130496 systemd-logind[1465]: New session 3 of user core. Jan 29 16:24:54.150183 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:24:54.205457 sshd[1573]: Connection closed by 10.0.0.1 port 49642 Jan 29 16:24:54.205796 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Jan 29 16:24:54.209555 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:49642.service: Deactivated successfully. Jan 29 16:24:54.211463 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:24:54.212158 systemd-logind[1465]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:24:54.213280 systemd-logind[1465]: Removed session 3. Jan 29 16:24:54.477077 systemd-networkd[1423]: eth0: Gained IPv6LL Jan 29 16:24:54.480209 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:24:54.482117 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:24:54.495150 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 16:24:54.497989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 16:24:54.500292 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:24:54.519020 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 16:24:54.519387 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 16:24:54.521445 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:24:54.523107 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:24:55.192702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:24:55.194497 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:24:55.196616 systemd[1]: Startup finished in 685ms (kernel) + 7.197s (initrd) + 4.668s (userspace) = 12.551s. Jan 29 16:24:55.227261 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:24:55.644816 kubelet[1600]: E0129 16:24:55.644749 1600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:24:55.648817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:24:55.649089 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:24:55.649474 systemd[1]: kubelet.service: Consumed 977ms CPU time, 253.7M memory peak. Jan 29 16:25:04.217920 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:38456.service - OpenSSH per-connection server daemon (10.0.0.1:38456). Jan 29 16:25:04.256422 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 38456 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:04.257836 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:04.261805 systemd-logind[1465]: New session 4 of user core. Jan 29 16:25:04.276991 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:25:04.330206 sshd[1615]: Connection closed by 10.0.0.1 port 38456 Jan 29 16:25:04.330556 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:04.343375 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:38456.service: Deactivated successfully. Jan 29 16:25:04.344997 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:25:04.346561 systemd-logind[1465]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:25:04.347783 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:38458.service - OpenSSH per-connection server daemon (10.0.0.1:38458). Jan 29 16:25:04.348459 systemd-logind[1465]: Removed session 4. Jan 29 16:25:04.385013 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 38458 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:04.386281 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:04.390340 systemd-logind[1465]: New session 5 of user core. Jan 29 16:25:04.400994 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 29 16:25:04.451603 sshd[1623]: Connection closed by 10.0.0.1 port 38458 Jan 29 16:25:04.452048 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:04.465498 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:38458.service: Deactivated successfully. Jan 29 16:25:04.467145 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:25:04.468521 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:25:04.479120 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:38472.service - OpenSSH per-connection server daemon (10.0.0.1:38472). Jan 29 16:25:04.479937 systemd-logind[1465]: Removed session 5. Jan 29 16:25:04.513172 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 38472 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:04.514752 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:04.519274 systemd-logind[1465]: New session 6 of user core. Jan 29 16:25:04.528995 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:25:04.583456 sshd[1631]: Connection closed by 10.0.0.1 port 38472 Jan 29 16:25:04.583916 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:04.600763 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:38472.service: Deactivated successfully. Jan 29 16:25:04.602892 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:25:04.604354 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:25:04.614099 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:38476.service - OpenSSH per-connection server daemon (10.0.0.1:38476). Jan 29 16:25:04.614995 systemd-logind[1465]: Removed session 6. Jan 29 16:25:04.650117 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 38476 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:04.651541 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:04.655692 systemd-logind[1465]: New session 7 of user core. Jan 29 16:25:04.668981 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:25:04.726258 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:25:04.726587 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:04.742878 sudo[1640]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:04.744317 sshd[1639]: Connection closed by 10.0.0.1 port 38476 Jan 29 16:25:04.744722 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:04.762620 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:38476.service: Deactivated successfully. Jan 29 16:25:04.764461 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:25:04.765891 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:25:04.777173 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:38484.service - OpenSSH per-connection server daemon (10.0.0.1:38484). Jan 29 16:25:04.778316 systemd-logind[1465]: Removed session 7. Jan 29 16:25:04.811781 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 38484 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:04.813269 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:04.817379 systemd-logind[1465]: New session 8 of user core. 
Jan 29 16:25:04.827969 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:25:04.881144 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:25:04.881535 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:04.885146 sudo[1650]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:04.890636 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:25:04.891035 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:04.908110 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:25:04.937458 augenrules[1672]: No rules Jan 29 16:25:04.939266 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:25:04.939532 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:25:04.940656 sudo[1649]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:04.942100 sshd[1648]: Connection closed by 10.0.0.1 port 38484 Jan 29 16:25:04.942399 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:04.954215 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:38484.service: Deactivated successfully. Jan 29 16:25:04.955788 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:25:04.957078 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:25:04.969125 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:38496.service - OpenSSH per-connection server daemon (10.0.0.1:38496). Jan 29 16:25:04.970151 systemd-logind[1465]: Removed session 8. Jan 29 16:25:05.002551 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 38496 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:05.003976 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:05.007953 systemd-logind[1465]: New session 9 of user core. Jan 29 16:25:05.030020 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:25:05.082811 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:25:05.083156 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:05.361070 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:25:05.361231 (dockerd)[1704]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:25:05.640423 dockerd[1704]: time="2025-01-29T16:25:05.640292493Z" level=info msg="Starting up" Jan 29 16:25:05.707053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:25:05.722182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:05.801437 dockerd[1704]: time="2025-01-29T16:25:05.801258992Z" level=info msg="Loading containers: start." Jan 29 16:25:05.992792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:25:05.996810 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:06.046658 kubelet[1760]: E0129 16:25:06.046545 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:06.053315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:06.053527 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:06.053926 systemd[1]: kubelet.service: Consumed 213ms CPU time, 108M memory peak. Jan 29 16:25:06.297895 kernel: Initializing XFRM netlink socket Jan 29 16:25:06.382410 systemd-networkd[1423]: docker0: Link UP Jan 29 16:25:06.423586 dockerd[1704]: time="2025-01-29T16:25:06.423527348Z" level=info msg="Loading containers: done." Jan 29 16:25:06.437485 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3375183247-merged.mount: Deactivated successfully. Jan 29 16:25:06.441523 dockerd[1704]: time="2025-01-29T16:25:06.441477931Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:25:06.441634 dockerd[1704]: time="2025-01-29T16:25:06.441595302Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:25:06.441758 dockerd[1704]: time="2025-01-29T16:25:06.441738841Z" level=info msg="Daemon has completed initialization" Jan 29 16:25:06.483362 dockerd[1704]: time="2025-01-29T16:25:06.483293180Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:25:06.483562 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:25:06.975600 containerd[1484]: time="2025-01-29T16:25:06.975544246Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 16:25:07.654593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3975962096.mount: Deactivated successfully. 
Jan 29 16:25:08.559463 containerd[1484]: time="2025-01-29T16:25:08.559416973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:08.560215 containerd[1484]: time="2025-01-29T16:25:08.560179804Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 29 16:25:08.561397 containerd[1484]: time="2025-01-29T16:25:08.561367952Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:08.563850 containerd[1484]: time="2025-01-29T16:25:08.563815803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:08.564802 containerd[1484]: time="2025-01-29T16:25:08.564768359Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.589178827s" Jan 29 16:25:08.564869 containerd[1484]: time="2025-01-29T16:25:08.564801030Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 29 16:25:08.565311 containerd[1484]: time="2025-01-29T16:25:08.565281381Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 16:25:09.766645 containerd[1484]: time="2025-01-29T16:25:09.766577600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:09.767430 containerd[1484]: time="2025-01-29T16:25:09.767363374Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 29 16:25:09.768426 containerd[1484]: time="2025-01-29T16:25:09.768393225Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:09.771971 containerd[1484]: time="2025-01-29T16:25:09.771914529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:09.772905 containerd[1484]: time="2025-01-29T16:25:09.772829665Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.207522045s" Jan 29 16:25:09.772905 containerd[1484]: time="2025-01-29T16:25:09.772900899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 29 16:25:09.773364 
containerd[1484]: time="2025-01-29T16:25:09.773339121Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 16:25:11.242237 containerd[1484]: time="2025-01-29T16:25:11.242168089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:11.243126 containerd[1484]: time="2025-01-29T16:25:11.243076583Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 29 16:25:11.244299 containerd[1484]: time="2025-01-29T16:25:11.244266645Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:11.247341 containerd[1484]: time="2025-01-29T16:25:11.247274926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:11.248356 containerd[1484]: time="2025-01-29T16:25:11.248320347Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.474949397s" Jan 29 16:25:11.248412 containerd[1484]: time="2025-01-29T16:25:11.248356645Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 29 16:25:11.248956 containerd[1484]: time="2025-01-29T16:25:11.248885407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 16:25:12.261503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3053078684.mount: Deactivated successfully. 
Jan 29 16:25:13.450369 containerd[1484]: time="2025-01-29T16:25:13.450313941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:13.451064 containerd[1484]: time="2025-01-29T16:25:13.451035604Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 29 16:25:13.452245 containerd[1484]: time="2025-01-29T16:25:13.452211189Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:13.454269 containerd[1484]: time="2025-01-29T16:25:13.454234904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:13.454932 containerd[1484]: time="2025-01-29T16:25:13.454903989Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.2059374s" Jan 29 16:25:13.454970 containerd[1484]: time="2025-01-29T16:25:13.454932582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 29 16:25:13.455391 containerd[1484]: time="2025-01-29T16:25:13.455368129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 16:25:13.939460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539834067.mount: Deactivated successfully. 
Jan 29 16:25:14.686933 containerd[1484]: time="2025-01-29T16:25:14.686875104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:14.688234 containerd[1484]: time="2025-01-29T16:25:14.688174211Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 29 16:25:14.731282 containerd[1484]: time="2025-01-29T16:25:14.731247278Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:14.794830 containerd[1484]: time="2025-01-29T16:25:14.794761019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:14.795958 containerd[1484]: time="2025-01-29T16:25:14.795899985Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.340498373s" Jan 29 16:25:14.795958 containerd[1484]: time="2025-01-29T16:25:14.795941092Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 29 16:25:14.796920 containerd[1484]: time="2025-01-29T16:25:14.796882888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:25:15.318914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856622125.mount: Deactivated successfully. 
Jan 29 16:25:15.325148 containerd[1484]: time="2025-01-29T16:25:15.325089865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:15.325783 containerd[1484]: time="2025-01-29T16:25:15.325737590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 16:25:15.326932 containerd[1484]: time="2025-01-29T16:25:15.326905620Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:15.329557 containerd[1484]: time="2025-01-29T16:25:15.329511567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:15.330147 containerd[1484]: time="2025-01-29T16:25:15.330113456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.195722ms" Jan 29 16:25:15.330147 containerd[1484]: time="2025-01-29T16:25:15.330139465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 16:25:15.330632 containerd[1484]: time="2025-01-29T16:25:15.330575532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 16:25:15.845293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046785563.mount: Deactivated successfully. Jan 29 16:25:16.271762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:25:16.280095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:16.979101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:16.983690 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:17.033052 kubelet[2067]: E0129 16:25:17.032973 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:17.036706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:17.036924 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:17.037269 systemd[1]: kubelet.service: Consumed 205ms CPU time, 106.6M memory peak. 
Jan 29 16:25:19.066766 containerd[1484]: time="2025-01-29T16:25:19.066709417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:19.067978 containerd[1484]: time="2025-01-29T16:25:19.067932120Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 29 16:25:19.069256 containerd[1484]: time="2025-01-29T16:25:19.069228732Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:19.074147 containerd[1484]: time="2025-01-29T16:25:19.074103133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:19.075327 containerd[1484]: time="2025-01-29T16:25:19.075288917Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.744685352s" Jan 29 16:25:19.075379 containerd[1484]: time="2025-01-29T16:25:19.075325085Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 29 16:25:21.165458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:21.165624 systemd[1]: kubelet.service: Consumed 205ms CPU time, 106.6M memory peak. Jan 29 16:25:21.181097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:21.208917 systemd[1]: Reload requested from client PID 2147 ('systemctl') (unit session-9.scope)... Jan 29 16:25:21.208933 systemd[1]: Reloading... Jan 29 16:25:21.298905 zram_generator::config[2191]: No configuration found. Jan 29 16:25:21.696211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:21.803909 systemd[1]: Reloading finished in 594 ms. Jan 29 16:25:21.853515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:21.857835 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:25:21.858817 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:21.859141 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:25:21.859417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:21.859458 systemd[1]: kubelet.service: Consumed 141ms CPU time, 91.9M memory peak. Jan 29 16:25:21.862209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:22.020593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:25:22.024754 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:25:22.145633 kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:25:22.145633 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 16:25:22.145633 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:25:22.146131 kubelet[2242]: I0129 16:25:22.145705 2242 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:25:22.534085 kubelet[2242]: I0129 16:25:22.534034 2242 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 16:25:22.534085 kubelet[2242]: I0129 16:25:22.534069 2242 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:25:22.534368 kubelet[2242]: I0129 16:25:22.534343 2242 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 16:25:22.608351 kubelet[2242]: E0129 16:25:22.608298 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:22.609431 kubelet[2242]: I0129 16:25:22.609381 2242 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:25:22.616112 kubelet[2242]: E0129 16:25:22.616075 2242 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:25:22.616112 kubelet[2242]: I0129 16:25:22.616103 2242 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:25:22.621620 kubelet[2242]: I0129 16:25:22.621595 2242 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:25:22.622707 kubelet[2242]: I0129 16:25:22.622666 2242 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:25:22.622872 kubelet[2242]: I0129 16:25:22.622698 2242 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:25:22.623038 kubelet[2242]: I0129 16:25:22.622882 2242 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:25:22.623038 kubelet[2242]: I0129 16:25:22.622892 2242 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:25:22.623038 kubelet[2242]: I0129 16:25:22.623014 2242 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:25:22.625575 kubelet[2242]: I0129 16:25:22.625549 2242 kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:25:22.625575 kubelet[2242]: I0129 16:25:22.625571 2242 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:25:22.625718 kubelet[2242]: I0129 16:25:22.625597 2242 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:25:22.625718 kubelet[2242]: I0129 16:25:22.625608 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:25:22.631360 kubelet[2242]: W0129 16:25:22.631143 2242 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jan 29 16:25:22.631360 kubelet[2242]: E0129 16:25:22.631218 2242 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:22.631360 kubelet[2242]: W0129 16:25:22.631274 2242 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jan 29 16:25:22.631360 kubelet[2242]: E0129 16:25:22.631326 2242 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:22.632932 kubelet[2242]: I0129 16:25:22.632384 2242 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:25:22.633280 kubelet[2242]: I0129 16:25:22.633244 2242 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:25:22.633868 kubelet[2242]: W0129 16:25:22.633834 2242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:25:22.636054 kubelet[2242]: I0129 16:25:22.636031 2242 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 16:25:22.636100 kubelet[2242]: I0129 16:25:22.636075 2242 server.go:1287] "Started kubelet" Jan 29 16:25:22.636584 kubelet[2242]: I0129 16:25:22.636529 2242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:25:22.637493 kubelet[2242]: I0129 16:25:22.637026 2242 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:25:22.637493 kubelet[2242]: I0129 16:25:22.637106 2242 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:25:22.639341 kubelet[2242]: I0129 16:25:22.638095 2242 server.go:490] "Adding debug handlers to kubelet server" Jan 29 16:25:22.639341 kubelet[2242]: I0129 16:25:22.638263 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:25:22.641169 kubelet[2242]: E0129 16:25:22.640533 2242 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:25:22.641169 kubelet[2242]: I0129 16:25:22.640733 2242 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:25:22.642068 kubelet[2242]: E0129 16:25:22.640385 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f368434b33da9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:25:22.636045737 +0000 UTC m=+0.601543249,LastTimestamp:2025-01-29 16:25:22.636045737 +0000 UTC m=+0.601543249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:25:22.642264 kubelet[2242]: E0129 16:25:22.642171 2242 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:25:22.642327 kubelet[2242]: I0129 16:25:22.642304 2242 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:25:22.642584 kubelet[2242]: I0129 16:25:22.642559 2242 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:25:22.642732 kubelet[2242]: I0129 16:25:22.642704 2242 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:25:22.643366 kubelet[2242]: I0129 16:25:22.643232 2242 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:25:22.643366 kubelet[2242]: E0129 16:25:22.643301 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms" Jan 29 16:25:22.643366 kubelet[2242]: W0129 16:25:22.643280 2242 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jan 29 16:25:22.643492 kubelet[2242]: E0129 16:25:22.643374 2242 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:22.643492 kubelet[2242]: I0129 16:25:22.643339 2242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:25:22.644397 kubelet[2242]: I0129 16:25:22.644345 2242 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:25:22.657305 kubelet[2242]: I0129 16:25:22.657134 2242 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 29 16:25:22.658443 kubelet[2242]: I0129 16:25:22.658420 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:25:22.658443 kubelet[2242]: I0129 16:25:22.658443 2242 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:25:22.658510 kubelet[2242]: I0129 16:25:22.658466 2242 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 29 16:25:22.658510 kubelet[2242]: I0129 16:25:22.658475 2242 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 16:25:22.658553 kubelet[2242]: E0129 16:25:22.658524 2242 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:25:22.663459 kubelet[2242]: W0129 16:25:22.663406 2242 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jan 29 16:25:22.663710 kubelet[2242]: E0129 16:25:22.663475 2242 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:22.663847 kubelet[2242]: I0129 16:25:22.663832 2242 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 16:25:22.663847 kubelet[2242]: I0129 16:25:22.663846 2242 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 16:25:22.663978 kubelet[2242]: I0129 16:25:22.663882 2242 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:25:22.743051 kubelet[2242]: E0129 16:25:22.743013 2242 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:25:22.759205 kubelet[2242]: E0129 16:25:22.759180 2242 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:25:22.843624 kubelet[2242]: E0129 16:25:22.843515 2242 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:25:22.843977 kubelet[2242]: E0129 16:25:22.843931 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Jan 29 16:25:22.901168 kubelet[2242]: I0129 16:25:22.901126 2242 policy_none.go:49] "None policy: Start" Jan 29 16:25:22.901168 kubelet[2242]: I0129 16:25:22.901156 2242 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 16:25:22.901168 kubelet[2242]: I0129 16:25:22.901170 2242 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:25:22.913029 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:25:22.927888 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:25:22.930705 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:25:22.940688 kubelet[2242]: I0129 16:25:22.940657 2242 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:25:22.940897 kubelet[2242]: I0129 16:25:22.940884 2242 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:25:22.940959 kubelet[2242]: I0129 16:25:22.940897 2242 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:25:22.941663 kubelet[2242]: I0129 16:25:22.941139 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:25:22.942018 kubelet[2242]: E0129 16:25:22.941958 2242 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 16:25:22.942018 kubelet[2242]: E0129 16:25:22.942001 2242 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 16:25:22.966245 systemd[1]: Created slice kubepods-burstable-podb12190510895c3d3955f8100e7f37e33.slice - libcontainer container kubepods-burstable-podb12190510895c3d3955f8100e7f37e33.slice. Jan 29 16:25:22.976784 kubelet[2242]: E0129 16:25:22.976746 2242 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:25:22.979124 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 29 16:25:22.990125 kubelet[2242]: E0129 16:25:22.990107 2242 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:25:22.992816 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. 
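
The slices created above follow systemd's nesting-by-name rule: kubepods-burstable-pod<uid>.slice lives under kubepods-burstable.slice, which lives under kubepods.slice. A sketch of the unit name kubelet's systemd cgroup driver (CgroupDriver "systemd" in the nodeConfig dump earlier) would produce for a pod; podSliceName is a made-up helper, and the "-"-to-"_" escaping of the UID is stated from memory, so treat it as an assumption (the hex UIDs in this log contain no dashes either way).

package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_") // "-" is reserved for slice nesting
	if qosClass == "guaranteed" {
		return "kubepods-pod" + uid + ".slice" // guaranteed pods sit directly under kubepods
	}
	return "kubepods-" + qosClass + "-pod" + uid + ".slice"
}

func main() {
	fmt.Println(podSliceName("burstable", "b12190510895c3d3955f8100e7f37e33"))
	// -> kubepods-burstable-podb12190510895c3d3955f8100e7f37e33.slice, as in the log
}
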
Jan 29 16:25:22.994314 kubelet[2242]: E0129 16:25:22.994293 2242 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:25:23.042383 kubelet[2242]: I0129 16:25:23.042359 2242 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:25:23.042774 kubelet[2242]: E0129 16:25:23.042748 2242 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jan 29 16:25:23.044970 kubelet[2242]: I0129 16:25:23.044946 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:23.044970 kubelet[2242]: I0129 16:25:23.044968 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:23.045059 kubelet[2242]: I0129 16:25:23.044986 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:23.045059 kubelet[2242]: I0129 16:25:23.045006 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b12190510895c3d3955f8100e7f37e33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b12190510895c3d3955f8100e7f37e33\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:23.045059 kubelet[2242]: I0129 16:25:23.045020 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b12190510895c3d3955f8100e7f37e33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b12190510895c3d3955f8100e7f37e33\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:23.045059 kubelet[2242]: I0129 16:25:23.045034 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b12190510895c3d3955f8100e7f37e33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b12190510895c3d3955f8100e7f37e33\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:23.045059 kubelet[2242]: I0129 16:25:23.045051 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:23.045203 kubelet[2242]: I0129 16:25:23.045064 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:23.045203 kubelet[2242]: I0129 16:25:23.045078 2242 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:23.244745 kubelet[2242]: I0129 16:25:23.244590 2242 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:25:23.244745 kubelet[2242]: E0129 16:25:23.244677 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Jan 29 16:25:23.245210 kubelet[2242]: E0129 16:25:23.244854 2242 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jan 29 16:25:23.278249 kubelet[2242]: E0129 16:25:23.278215 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:23.278815 containerd[1484]: time="2025-01-29T16:25:23.278770074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b12190510895c3d3955f8100e7f37e33,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:23.290995 kubelet[2242]: E0129 16:25:23.290968 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:23.291336 containerd[1484]: time="2025-01-29T16:25:23.291304280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:23.294555 kubelet[2242]: E0129 16:25:23.294532 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:23.294800 containerd[1484]: time="2025-01-29T16:25:23.294779548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:23.502726 kubelet[2242]: W0129 16:25:23.502572 2242 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jan 29 16:25:23.502726 kubelet[2242]: E0129 16:25:23.502654 2242 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:23.646759 
kubelet[2242]: I0129 16:25:23.646720 2242 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:25:23.647076 kubelet[2242]: E0129 16:25:23.647040 2242 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jan 29 16:25:23.912194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592288756.mount: Deactivated successfully. Jan 29 16:25:23.918472 containerd[1484]: time="2025-01-29T16:25:23.918418373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:23.921834 containerd[1484]: time="2025-01-29T16:25:23.921769909Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 16:25:23.922763 containerd[1484]: time="2025-01-29T16:25:23.922710513Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:23.924701 containerd[1484]: time="2025-01-29T16:25:23.924664387Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:23.925404 containerd[1484]: time="2025-01-29T16:25:23.925371443Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:25:23.926269 containerd[1484]: time="2025-01-29T16:25:23.926233549Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:23.929350 containerd[1484]: time="2025-01-29T16:25:23.929315619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:25:23.930355 containerd[1484]: time="2025-01-29T16:25:23.930309884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:23.931138 containerd[1484]: time="2025-01-29T16:25:23.931107470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 652.226307ms" Jan 29 16:25:23.933602 containerd[1484]: time="2025-01-29T16:25:23.933573505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 642.204382ms" Jan 29 16:25:23.935992 containerd[1484]: time="2025-01-29T16:25:23.935966412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 641.138022ms" Jan 29 16:25:23.993738 kubelet[2242]: W0129 16:25:23.993664 2242 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jan 29 16:25:23.993738 kubelet[2242]: E0129 16:25:23.993738 2242 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:24.045224 kubelet[2242]: E0129 16:25:24.045177 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="1.6s" Jan 29 16:25:24.070938 kubelet[2242]: W0129 16:25:24.070873 2242 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jan 29 16:25:24.071058 kubelet[2242]: E0129 16:25:24.070945 2242 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:24.083812 kubelet[2242]: W0129 16:25:24.083742 2242 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Jan 29 16:25:24.083812 kubelet[2242]: E0129 16:25:24.083809 2242 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:25:24.093434 containerd[1484]: time="2025-01-29T16:25:24.093129539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:24.093434 containerd[1484]: time="2025-01-29T16:25:24.093184332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:24.093434 containerd[1484]: time="2025-01-29T16:25:24.093195693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:24.093434 containerd[1484]: time="2025-01-29T16:25:24.093280823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:24.094111 containerd[1484]: time="2025-01-29T16:25:24.094032342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:24.094177 containerd[1484]: time="2025-01-29T16:25:24.094122521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:24.094177 containerd[1484]: time="2025-01-29T16:25:24.094160072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:24.095638 containerd[1484]: time="2025-01-29T16:25:24.094261973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:24.098254 containerd[1484]: time="2025-01-29T16:25:24.098037934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:24.098254 containerd[1484]: time="2025-01-29T16:25:24.098099670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:24.098254 containerd[1484]: time="2025-01-29T16:25:24.098152138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:24.098361 containerd[1484]: time="2025-01-29T16:25:24.098316166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:24.138330 systemd[1]: Started cri-containerd-4255bad1212445b5ffdeb11561892ce509cf7661201e36668e385198857d5379.scope - libcontainer container 4255bad1212445b5ffdeb11561892ce509cf7661201e36668e385198857d5379. Jan 29 16:25:24.142898 systemd[1]: Started cri-containerd-fe1a7626ff0d5922c7f75dee98590401a90e7ca3ecb81c0dfd5d4f44f71c6fa3.scope - libcontainer container fe1a7626ff0d5922c7f75dee98590401a90e7ca3ecb81c0dfd5d4f44f71c6fa3. Jan 29 16:25:24.147256 systemd[1]: Started cri-containerd-aecd61a2f040e0718866bac3c4fe0805b189f384429a55b828793f1d783a95d7.scope - libcontainer container aecd61a2f040e0718866bac3c4fe0805b189f384429a55b828793f1d783a95d7. 
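
The three cri-containerd-*.scope units above are pod sandboxes coming up, and the entries that follow show the rest of the CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer places a container inside it, StartContainer runs it. A sketch of that call order; criClient and fakeCRI are invented for this sketch, while the real interface is k8s.io/cri-api's RuntimeService, which is considerably larger.

package main

import "fmt"

type criClient interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeCRI struct{ seq int }

func (f *fakeCRI) RunPodSandbox(podName string) (string, error) {
	f.seq++
	return fmt.Sprintf("sandbox-%d", f.seq), nil
}

func (f *fakeCRI) CreateContainer(sandboxID, containerName string) (string, error) {
	f.seq++
	return fmt.Sprintf("container-%d", f.seq), nil
}

func (f *fakeCRI) StartContainer(containerID string) error { return nil }

func main() {
	var c criClient = &fakeCRI{}
	sb, _ := c.RunPodSandbox("kube-apiserver-localhost") // cf. RunPodSandbox entries above
	ctr, _ := c.CreateContainer(sb, "kube-apiserver")    // cf. CreateContainer entries below
	if err := c.StartContainer(ctr); err == nil {
		fmt.Println("StartContainer for", ctr, "returned successfully")
	}
}
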
Jan 29 16:25:24.192631 containerd[1484]: time="2025-01-29T16:25:24.192509261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"4255bad1212445b5ffdeb11561892ce509cf7661201e36668e385198857d5379\"" Jan 29 16:25:24.195774 kubelet[2242]: E0129 16:25:24.195739 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:24.196767 containerd[1484]: time="2025-01-29T16:25:24.196741038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b12190510895c3d3955f8100e7f37e33,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe1a7626ff0d5922c7f75dee98590401a90e7ca3ecb81c0dfd5d4f44f71c6fa3\"" Jan 29 16:25:24.197097 containerd[1484]: time="2025-01-29T16:25:24.196894395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"aecd61a2f040e0718866bac3c4fe0805b189f384429a55b828793f1d783a95d7\"" Jan 29 16:25:24.197241 kubelet[2242]: E0129 16:25:24.197204 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:24.197796 kubelet[2242]: E0129 16:25:24.197698 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:24.199308 containerd[1484]: time="2025-01-29T16:25:24.199281903Z" level=info msg="CreateContainer within sandbox \"4255bad1212445b5ffdeb11561892ce509cf7661201e36668e385198857d5379\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:25:24.199460 containerd[1484]: time="2025-01-29T16:25:24.199280480Z" level=info msg="CreateContainer within sandbox \"fe1a7626ff0d5922c7f75dee98590401a90e7ca3ecb81c0dfd5d4f44f71c6fa3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:25:24.200076 containerd[1484]: time="2025-01-29T16:25:24.200047007Z" level=info msg="CreateContainer within sandbox \"aecd61a2f040e0718866bac3c4fe0805b189f384429a55b828793f1d783a95d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:25:24.283832 containerd[1484]: time="2025-01-29T16:25:24.283786789Z" level=info msg="CreateContainer within sandbox \"4255bad1212445b5ffdeb11561892ce509cf7661201e36668e385198857d5379\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e2621d4366198d11f4aa8cdabef3306a8ec5361925ec3142006b3eca6f1aa2e5\"" Jan 29 16:25:24.284478 containerd[1484]: time="2025-01-29T16:25:24.284452388Z" level=info msg="StartContainer for \"e2621d4366198d11f4aa8cdabef3306a8ec5361925ec3142006b3eca6f1aa2e5\"" Jan 29 16:25:24.286839 containerd[1484]: time="2025-01-29T16:25:24.286806813Z" level=info msg="CreateContainer within sandbox \"fe1a7626ff0d5922c7f75dee98590401a90e7ca3ecb81c0dfd5d4f44f71c6fa3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"858fa7cc4c1f33724fb42c9ea187e64a4a97066c3c3de184ec4acc4ab24260f1\"" Jan 29 16:25:24.287261 containerd[1484]: time="2025-01-29T16:25:24.287211682Z" level=info msg="StartContainer for \"858fa7cc4c1f33724fb42c9ea187e64a4a97066c3c3de184ec4acc4ab24260f1\"" Jan 29 
16:25:24.289036 containerd[1484]: time="2025-01-29T16:25:24.288991711Z" level=info msg="CreateContainer within sandbox \"aecd61a2f040e0718866bac3c4fe0805b189f384429a55b828793f1d783a95d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e61037c3e76ba82a111e6aa4166ee9e5e7eaf64cf34db5f0a67053c83103ea4\"" Jan 29 16:25:24.289359 containerd[1484]: time="2025-01-29T16:25:24.289333021Z" level=info msg="StartContainer for \"4e61037c3e76ba82a111e6aa4166ee9e5e7eaf64cf34db5f0a67053c83103ea4\"" Jan 29 16:25:24.313008 systemd[1]: Started cri-containerd-858fa7cc4c1f33724fb42c9ea187e64a4a97066c3c3de184ec4acc4ab24260f1.scope - libcontainer container 858fa7cc4c1f33724fb42c9ea187e64a4a97066c3c3de184ec4acc4ab24260f1. Jan 29 16:25:24.314238 systemd[1]: Started cri-containerd-e2621d4366198d11f4aa8cdabef3306a8ec5361925ec3142006b3eca6f1aa2e5.scope - libcontainer container e2621d4366198d11f4aa8cdabef3306a8ec5361925ec3142006b3eca6f1aa2e5. Jan 29 16:25:24.319312 systemd[1]: Started cri-containerd-4e61037c3e76ba82a111e6aa4166ee9e5e7eaf64cf34db5f0a67053c83103ea4.scope - libcontainer container 4e61037c3e76ba82a111e6aa4166ee9e5e7eaf64cf34db5f0a67053c83103ea4. Jan 29 16:25:24.370814 containerd[1484]: time="2025-01-29T16:25:24.369459727Z" level=info msg="StartContainer for \"e2621d4366198d11f4aa8cdabef3306a8ec5361925ec3142006b3eca6f1aa2e5\" returns successfully" Jan 29 16:25:24.376068 containerd[1484]: time="2025-01-29T16:25:24.375906607Z" level=info msg="StartContainer for \"4e61037c3e76ba82a111e6aa4166ee9e5e7eaf64cf34db5f0a67053c83103ea4\" returns successfully" Jan 29 16:25:24.376547 containerd[1484]: time="2025-01-29T16:25:24.376096012Z" level=info msg="StartContainer for \"858fa7cc4c1f33724fb42c9ea187e64a4a97066c3c3de184ec4acc4ab24260f1\" returns successfully" Jan 29 16:25:24.448881 kubelet[2242]: I0129 16:25:24.448750 2242 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:25:24.672581 kubelet[2242]: E0129 16:25:24.672544 2242 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:25:24.672727 kubelet[2242]: E0129 16:25:24.672673 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:24.674760 kubelet[2242]: E0129 16:25:24.674733 2242 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:25:24.674853 kubelet[2242]: E0129 16:25:24.674829 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:24.676626 kubelet[2242]: E0129 16:25:24.676601 2242 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:25:24.676706 kubelet[2242]: E0129 16:25:24.676686 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:25.679408 kubelet[2242]: E0129 16:25:25.679178 2242 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:25:25.679408 kubelet[2242]: E0129 
16:25:25.679308 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:25.679408 kubelet[2242]: E0129 16:25:25.679330 2242 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 16:25:25.679853 kubelet[2242]: E0129 16:25:25.679555 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:25.761140 kubelet[2242]: E0129 16:25:25.761097 2242 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 16:25:25.859854 kubelet[2242]: I0129 16:25:25.859810 2242 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 16:25:25.859854 kubelet[2242]: E0129 16:25:25.859844 2242 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 16:25:25.863875 kubelet[2242]: E0129 16:25:25.862350 2242 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:25:25.943832 kubelet[2242]: I0129 16:25:25.943692 2242 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:25.947747 kubelet[2242]: E0129 16:25:25.947642 2242 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:25.947747 kubelet[2242]: I0129 16:25:25.947674 2242 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:25.949145 kubelet[2242]: E0129 16:25:25.949111 2242 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:25.949145 kubelet[2242]: I0129 16:25:25.949130 2242 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:25.950275 kubelet[2242]: E0129 16:25:25.950236 2242 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:26.655239 kubelet[2242]: I0129 16:25:26.655186 2242 apiserver.go:52] "Watching apiserver" Jan 29 16:25:26.743553 kubelet[2242]: I0129 16:25:26.743514 2242 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:25:26.763268 kubelet[2242]: I0129 16:25:26.763233 2242 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:26.785005 kubelet[2242]: E0129 16:25:26.784977 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:27.680294 kubelet[2242]: E0129 16:25:27.680256 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:28.062958 systemd[1]: Reload requested from client PID 2517 ('systemctl') (unit session-9.scope)... Jan 29 16:25:28.062973 systemd[1]: Reloading... Jan 29 16:25:28.143907 zram_generator::config[2564]: No configuration found. Jan 29 16:25:28.254646 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:28.370619 systemd[1]: Reloading finished in 307 ms. Jan 29 16:25:28.397468 kubelet[2242]: I0129 16:25:28.397395 2242 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:25:28.397569 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:28.415010 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:25:28.415309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:28.415355 systemd[1]: kubelet.service: Consumed 1.090s CPU time, 129M memory peak. Jan 29 16:25:28.422398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:28.592390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:28.597501 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:25:28.643729 kubelet[2606]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:25:28.643729 kubelet[2606]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 16:25:28.643729 kubelet[2606]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:25:28.644171 kubelet[2606]: I0129 16:25:28.643727 2606 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:25:28.649756 kubelet[2606]: I0129 16:25:28.649724 2606 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 16:25:28.649756 kubelet[2606]: I0129 16:25:28.649749 2606 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:25:28.649980 kubelet[2606]: I0129 16:25:28.649964 2606 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 16:25:28.651064 kubelet[2606]: I0129 16:25:28.651046 2606 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
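
kubelet-client-current.pem, loaded above by the certificate store, is a single file carrying both the rotated client certificate and its private key. Go's crypto/tls handles such combined PEMs when the same path is passed for both arguments, because the certificate parser skips key blocks and the key parser skips certificate blocks. A sketch only; the path comes from the log line above, and on most machines the file will not exist or will require root to read.

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	const pemPath = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	cert, err := tls.LoadX509KeyPair(pemPath, pemPath) // same combined PEM for cert and key
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("loaded client cert chain with %d certificate(s)\n", len(cert.Certificate))
}
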
Jan 29 16:25:28.652855 kubelet[2606]: I0129 16:25:28.652833 2606 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:25:28.656691 kubelet[2606]: E0129 16:25:28.656242 2606 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:25:28.656691 kubelet[2606]: I0129 16:25:28.656279 2606 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:25:28.661228 kubelet[2606]: I0129 16:25:28.661193 2606 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 16:25:28.662000 kubelet[2606]: I0129 16:25:28.661951 2606 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:25:28.662210 kubelet[2606]: I0129 16:25:28.662012 2606 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:25:28.662299 kubelet[2606]: I0129 16:25:28.662217 2606 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:25:28.662299 kubelet[2606]: I0129 16:25:28.662230 2606 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 16:25:28.662299 kubelet[2606]: I0129 16:25:28.662280 2606 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:25:28.662469 kubelet[2606]: I0129 16:25:28.662441 2606 kubelet.go:446] "Attempting to sync node with API server" Jan 29 16:25:28.662469 kubelet[2606]: I0129 16:25:28.662461 2606 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:25:28.662560 kubelet[2606]: I0129 16:25:28.662481 2606 kubelet.go:352] "Adding apiserver pod source" Jan 29 16:25:28.662560 kubelet[2606]: I0129 16:25:28.662495 2606 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jan 29 16:25:28.663335 kubelet[2606]: I0129 16:25:28.663069 2606 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:25:28.663434 kubelet[2606]: I0129 16:25:28.663418 2606 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:25:28.663820 kubelet[2606]: I0129 16:25:28.663806 2606 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 16:25:28.663882 kubelet[2606]: I0129 16:25:28.663839 2606 server.go:1287] "Started kubelet" Jan 29 16:25:28.665335 kubelet[2606]: I0129 16:25:28.663936 2606 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:25:28.665335 kubelet[2606]: I0129 16:25:28.664144 2606 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:25:28.665335 kubelet[2606]: I0129 16:25:28.664425 2606 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:25:28.665335 kubelet[2606]: I0129 16:25:28.665144 2606 server.go:490] "Adding debug handlers to kubelet server" Jan 29 16:25:28.667474 kubelet[2606]: I0129 16:25:28.667443 2606 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:25:28.667701 kubelet[2606]: I0129 16:25:28.667680 2606 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:25:28.676354 kubelet[2606]: E0129 16:25:28.676315 2606 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:25:28.677256 kubelet[2606]: I0129 16:25:28.677230 2606 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 16:25:28.677421 kubelet[2606]: E0129 16:25:28.677399 2606 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:25:28.677740 kubelet[2606]: I0129 16:25:28.677717 2606 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:25:28.678917 kubelet[2606]: I0129 16:25:28.677896 2606 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:25:28.684255 kubelet[2606]: I0129 16:25:28.684153 2606 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:25:28.684788 kubelet[2606]: I0129 16:25:28.684485 2606 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:25:28.687396 kubelet[2606]: I0129 16:25:28.687098 2606 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:25:28.689717 kubelet[2606]: I0129 16:25:28.689658 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:25:28.690898 kubelet[2606]: I0129 16:25:28.690870 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:25:28.690898 kubelet[2606]: I0129 16:25:28.690902 2606 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 16:25:28.691026 kubelet[2606]: I0129 16:25:28.690927 2606 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 16:25:28.691026 kubelet[2606]: I0129 16:25:28.690948 2606 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 16:25:28.691026 kubelet[2606]: E0129 16:25:28.691001 2606 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:25:28.722109 kubelet[2606]: I0129 16:25:28.722080 2606 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 16:25:28.722109 kubelet[2606]: I0129 16:25:28.722097 2606 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 16:25:28.722109 kubelet[2606]: I0129 16:25:28.722116 2606 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:25:28.722466 kubelet[2606]: I0129 16:25:28.722442 2606 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:25:28.722495 kubelet[2606]: I0129 16:25:28.722459 2606 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:25:28.722495 kubelet[2606]: I0129 16:25:28.722479 2606 policy_none.go:49] "None policy: Start" Jan 29 16:25:28.722495 kubelet[2606]: I0129 16:25:28.722488 2606 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 16:25:28.722495 kubelet[2606]: I0129 16:25:28.722499 2606 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:25:28.722642 kubelet[2606]: I0129 16:25:28.722625 2606 state_mem.go:75] "Updated machine memory state" Jan 29 16:25:28.728292 kubelet[2606]: I0129 16:25:28.728262 2606 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:25:28.728501 kubelet[2606]: I0129 16:25:28.728480 2606 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:25:28.728538 kubelet[2606]: I0129 16:25:28.728494 2606 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:25:28.728905 kubelet[2606]: I0129 16:25:28.728727 2606 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:25:28.729354 kubelet[2606]: E0129 16:25:28.729339 2606 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 16:25:28.792405 kubelet[2606]: I0129 16:25:28.792364 2606 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:28.792405 kubelet[2606]: I0129 16:25:28.792409 2606 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:28.792657 kubelet[2606]: I0129 16:25:28.792633 2606 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:28.800012 kubelet[2606]: E0129 16:25:28.799976 2606 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:28.833384 kubelet[2606]: I0129 16:25:28.833360 2606 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 16:25:28.839043 kubelet[2606]: I0129 16:25:28.839015 2606 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 29 16:25:28.839137 kubelet[2606]: I0129 16:25:28.839119 2606 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 16:25:28.879349 kubelet[2606]: I0129 16:25:28.879318 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:28.879349 kubelet[2606]: I0129 16:25:28.879348 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:28.879477 kubelet[2606]: I0129 16:25:28.879368 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:28.879477 kubelet[2606]: I0129 16:25:28.879384 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:28.879477 kubelet[2606]: I0129 16:25:28.879399 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b12190510895c3d3955f8100e7f37e33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b12190510895c3d3955f8100e7f37e33\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:28.879477 kubelet[2606]: I0129 16:25:28.879414 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b12190510895c3d3955f8100e7f37e33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b12190510895c3d3955f8100e7f37e33\") " pod="kube-system/kube-apiserver-localhost" 
Jan 29 16:25:28.879477 kubelet[2606]: I0129 16:25:28.879429 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:28.879591 kubelet[2606]: I0129 16:25:28.879478 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:25:28.879591 kubelet[2606]: I0129 16:25:28.879505 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b12190510895c3d3955f8100e7f37e33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b12190510895c3d3955f8100e7f37e33\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:25:29.030268 sudo[2644]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:25:29.030601 sudo[2644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:25:29.101318 kubelet[2606]: E0129 16:25:29.101280 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:29.101447 kubelet[2606]: E0129 16:25:29.101280 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:29.101447 kubelet[2606]: E0129 16:25:29.101289 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:29.485931 sudo[2644]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:29.664797 kubelet[2606]: I0129 16:25:29.664764 2606 apiserver.go:52] "Watching apiserver" Jan 29 16:25:29.679886 kubelet[2606]: I0129 16:25:29.679851 2606 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:25:29.703873 kubelet[2606]: I0129 16:25:29.703834 2606 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:29.705549 kubelet[2606]: E0129 16:25:29.704152 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:29.705549 kubelet[2606]: E0129 16:25:29.704520 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:29.712887 kubelet[2606]: E0129 16:25:29.709718 2606 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 16:25:29.712887 kubelet[2606]: E0129 16:25:29.709843 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:29.749370 kubelet[2606]: I0129 16:25:29.749211 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.749197256 podStartE2EDuration="1.749197256s" podCreationTimestamp="2025-01-29 16:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:29.748271583 +0000 UTC m=+1.146280884" watchObservedRunningTime="2025-01-29 16:25:29.749197256 +0000 UTC m=+1.147206567" Jan 29 16:25:29.754195 kubelet[2606]: I0129 16:25:29.754127 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7541154350000001 podStartE2EDuration="1.754115435s" podCreationTimestamp="2025-01-29 16:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:29.754086379 +0000 UTC m=+1.152095710" watchObservedRunningTime="2025-01-29 16:25:29.754115435 +0000 UTC m=+1.152124746" Jan 29 16:25:29.761309 kubelet[2606]: I0129 16:25:29.761196 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.7611840130000003 podStartE2EDuration="3.761184013s" podCreationTimestamp="2025-01-29 16:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:29.761114861 +0000 UTC m=+1.159124172" watchObservedRunningTime="2025-01-29 16:25:29.761184013 +0000 UTC m=+1.159193334" Jan 29 16:25:30.643765 sudo[1684]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:30.645310 sshd[1683]: Connection closed by 10.0.0.1 port 38496 Jan 29 16:25:30.645708 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:30.649012 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:38496.service: Deactivated successfully. Jan 29 16:25:30.651003 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:25:30.651197 systemd[1]: session-9.scope: Consumed 4.102s CPU time, 253.7M memory peak. Jan 29 16:25:30.652326 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:25:30.653189 systemd-logind[1465]: Removed session 9. 
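The recurring "Nameserver limits exceeded" errors above come from the host resolver config listing more nameservers than the platform resolver uses: glibc only reads the first three "nameserver" lines, so the kubelet warns and applies a truncated list (1.1.1.1 1.0.0.1 8.8.8.8). A rough sketch of that check, assuming a simple resolv.conf parse rather than kubelet's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the warning text seen in the kubelet entries above.
		fmt.Printf("Nameserver limits exceeded, applying first %d: %s\n",
			maxNameservers, strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("effective nameservers:", servers)
}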
Jan 29 16:25:30.705443 kubelet[2606]: E0129 16:25:30.705215 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:30.705443 kubelet[2606]: E0129 16:25:30.705294 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:31.705992 kubelet[2606]: E0129 16:25:31.705948 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:32.707366 kubelet[2606]: E0129 16:25:32.707333 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:33.130628 kubelet[2606]: I0129 16:25:33.130584 2606 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:25:33.130895 containerd[1484]: time="2025-01-29T16:25:33.130848249Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:25:33.131228 kubelet[2606]: I0129 16:25:33.131004 2606 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:25:33.589989 systemd[1]: Created slice kubepods-besteffort-pod491ad20e_f2c0_4e66_aaaa_88984bbccca4.slice - libcontainer container kubepods-besteffort-pod491ad20e_f2c0_4e66_aaaa_88984bbccca4.slice. Jan 29 16:25:33.611432 kubelet[2606]: I0129 16:25:33.611404 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-hubble-tls\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.611422 systemd[1]: Created slice kubepods-burstable-podacbf8f52_2030_4935_b4e3_dc6c908882cc.slice - libcontainer container kubepods-burstable-podacbf8f52_2030_4935_b4e3_dc6c908882cc.slice. 
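The "Created slice kubepods-..." lines show the systemd cgroup driver's naming scheme: one slice per pod, keyed by QoS class and pod UID with dashes mapped to underscores. A small sketch that reproduces the names seen above; the helper is hypothetical, and the real logic lives in kubelet's container manager:

package main

import (
	"fmt"
	"strings"
)

// podSlice builds "kubepods-<qos>-pod<uid>.slice" with UID dashes escaped
// to underscores, as systemd unit names require.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "491ad20e-f2c0-4e66-aaaa-88984bbccca4"))
	// kubepods-besteffort-pod491ad20e_f2c0_4e66_aaaa_88984bbccca4.slice
	fmt.Println(podSlice("burstable", "acbf8f52-2030-4935-b4e3-dc6c908882cc"))
	// kubepods-burstable-podacbf8f52_2030_4935_b4e3_dc6c908882cc.slice
}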
Jan 29 16:25:33.611751 kubelet[2606]: I0129 16:25:33.611624 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-run\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.611751 kubelet[2606]: I0129 16:25:33.611646 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-lib-modules\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.611751 kubelet[2606]: I0129 16:25:33.611661 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-etc-cni-netd\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.611751 kubelet[2606]: I0129 16:25:33.611675 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6jxw\" (UniqueName: \"kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-kube-api-access-z6jxw\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.611751 kubelet[2606]: I0129 16:25:33.611689 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-cgroup\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.611751 kubelet[2606]: I0129 16:25:33.611702 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cni-path\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.611969 kubelet[2606]: I0129 16:25:33.611714 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acbf8f52-2030-4935-b4e3-dc6c908882cc-clustermesh-secrets\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.611969 kubelet[2606]: I0129 16:25:33.611726 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/491ad20e-f2c0-4e66-aaaa-88984bbccca4-kube-proxy\") pod \"kube-proxy-tqkw4\" (UID: \"491ad20e-f2c0-4e66-aaaa-88984bbccca4\") " pod="kube-system/kube-proxy-tqkw4" Jan 29 16:25:33.611969 kubelet[2606]: I0129 16:25:33.611752 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/491ad20e-f2c0-4e66-aaaa-88984bbccca4-xtables-lock\") pod \"kube-proxy-tqkw4\" (UID: \"491ad20e-f2c0-4e66-aaaa-88984bbccca4\") " pod="kube-system/kube-proxy-tqkw4" Jan 29 16:25:33.612165 kubelet[2606]: I0129 16:25:33.611911 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-bpf-maps\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.612201 kubelet[2606]: I0129 16:25:33.612164 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-hostproc\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.612201 kubelet[2606]: I0129 16:25:33.612181 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/491ad20e-f2c0-4e66-aaaa-88984bbccca4-lib-modules\") pod \"kube-proxy-tqkw4\" (UID: \"491ad20e-f2c0-4e66-aaaa-88984bbccca4\") " pod="kube-system/kube-proxy-tqkw4" Jan 29 16:25:33.612201 kubelet[2606]: I0129 16:25:33.612193 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-xtables-lock\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.612265 kubelet[2606]: I0129 16:25:33.612207 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-config-path\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.612265 kubelet[2606]: I0129 16:25:33.612221 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-host-proc-sys-net\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.612265 kubelet[2606]: I0129 16:25:33.612240 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns6m7\" (UniqueName: \"kubernetes.io/projected/491ad20e-f2c0-4e66-aaaa-88984bbccca4-kube-api-access-ns6m7\") pod \"kube-proxy-tqkw4\" (UID: \"491ad20e-f2c0-4e66-aaaa-88984bbccca4\") " pod="kube-system/kube-proxy-tqkw4" Jan 29 16:25:33.612265 kubelet[2606]: I0129 16:25:33.612254 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-host-proc-sys-kernel\") pod \"cilium-ckw8x\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " pod="kube-system/cilium-ckw8x" Jan 29 16:25:33.721989 kubelet[2606]: E0129 16:25:33.721961 2606 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 16:25:33.722841 kubelet[2606]: E0129 16:25:33.722327 2606 projected.go:194] Error preparing data for projected volume kube-api-access-ns6m7 for pod kube-system/kube-proxy-tqkw4: configmap "kube-root-ca.crt" not found Jan 29 16:25:33.722841 kubelet[2606]: E0129 16:25:33.722378 2606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/491ad20e-f2c0-4e66-aaaa-88984bbccca4-kube-api-access-ns6m7 podName:491ad20e-f2c0-4e66-aaaa-88984bbccca4 nodeName:}" failed. 
No retries permitted until 2025-01-29 16:25:34.222362035 +0000 UTC m=+5.620371346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ns6m7" (UniqueName: "kubernetes.io/projected/491ad20e-f2c0-4e66-aaaa-88984bbccca4-kube-api-access-ns6m7") pod "kube-proxy-tqkw4" (UID: "491ad20e-f2c0-4e66-aaaa-88984bbccca4") : configmap "kube-root-ca.crt" not found Jan 29 16:25:33.722841 kubelet[2606]: E0129 16:25:33.722629 2606 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 16:25:33.722841 kubelet[2606]: E0129 16:25:33.722646 2606 projected.go:194] Error preparing data for projected volume kube-api-access-z6jxw for pod kube-system/cilium-ckw8x: configmap "kube-root-ca.crt" not found Jan 29 16:25:33.722841 kubelet[2606]: E0129 16:25:33.722701 2606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-kube-api-access-z6jxw podName:acbf8f52-2030-4935-b4e3-dc6c908882cc nodeName:}" failed. No retries permitted until 2025-01-29 16:25:34.22266839 +0000 UTC m=+5.620677701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-z6jxw" (UniqueName: "kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-kube-api-access-z6jxw") pod "cilium-ckw8x" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc") : configmap "kube-root-ca.crt" not found Jan 29 16:25:34.141328 systemd[1]: Created slice kubepods-besteffort-podc69f1ba6_8703_434f_a14e_22d47f68ec03.slice - libcontainer container kubepods-besteffort-podc69f1ba6_8703_434f_a14e_22d47f68ec03.slice. Jan 29 16:25:34.216662 kubelet[2606]: I0129 16:25:34.216583 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c69f1ba6-8703-434f-a14e-22d47f68ec03-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9ddf2\" (UID: \"c69f1ba6-8703-434f-a14e-22d47f68ec03\") " pod="kube-system/cilium-operator-6c4d7847fc-9ddf2" Jan 29 16:25:34.216662 kubelet[2606]: I0129 16:25:34.216661 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvm6v\" (UniqueName: \"kubernetes.io/projected/c69f1ba6-8703-434f-a14e-22d47f68ec03-kube-api-access-jvm6v\") pod \"cilium-operator-6c4d7847fc-9ddf2\" (UID: \"c69f1ba6-8703-434f-a14e-22d47f68ec03\") " pod="kube-system/cilium-operator-6c4d7847fc-9ddf2" Jan 29 16:25:34.444949 kubelet[2606]: E0129 16:25:34.444783 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:34.446399 containerd[1484]: time="2025-01-29T16:25:34.446343927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9ddf2,Uid:c69f1ba6-8703-434f-a14e-22d47f68ec03,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:34.509585 kubelet[2606]: E0129 16:25:34.509537 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:34.510220 containerd[1484]: time="2025-01-29T16:25:34.510154227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqkw4,Uid:491ad20e-f2c0-4e66-aaaa-88984bbccca4,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:34.514835 kubelet[2606]: E0129 16:25:34.514786 2606 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:34.515404 containerd[1484]: time="2025-01-29T16:25:34.515362214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ckw8x,Uid:acbf8f52-2030-4935-b4e3-dc6c908882cc,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:34.613410 containerd[1484]: time="2025-01-29T16:25:34.612942501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:34.613410 containerd[1484]: time="2025-01-29T16:25:34.613364525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:34.613549 containerd[1484]: time="2025-01-29T16:25:34.613417936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:34.613600 containerd[1484]: time="2025-01-29T16:25:34.613562211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:34.618163 containerd[1484]: time="2025-01-29T16:25:34.618018998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:34.618650 containerd[1484]: time="2025-01-29T16:25:34.618589234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:34.618715 containerd[1484]: time="2025-01-29T16:25:34.618666801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:34.618927 containerd[1484]: time="2025-01-29T16:25:34.618838468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:34.619328 containerd[1484]: time="2025-01-29T16:25:34.619042797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:34.619328 containerd[1484]: time="2025-01-29T16:25:34.619098684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:34.619328 containerd[1484]: time="2025-01-29T16:25:34.619113963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:34.619638 containerd[1484]: time="2025-01-29T16:25:34.619527520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:34.639042 systemd[1]: Started cri-containerd-c57e67bdac2f26b44e33dc783ee9d89620856fad6c1dc9d3d029d017efb39f8e.scope - libcontainer container c57e67bdac2f26b44e33dc783ee9d89620856fad6c1dc9d3d029d017efb39f8e. Jan 29 16:25:34.643734 systemd[1]: Started cri-containerd-09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643.scope - libcontainer container 09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643. Jan 29 16:25:34.645222 systemd[1]: Started cri-containerd-7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db.scope - libcontainer container 7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db. 
Jan 29 16:25:34.673100 containerd[1484]: time="2025-01-29T16:25:34.673020935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqkw4,Uid:491ad20e-f2c0-4e66-aaaa-88984bbccca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c57e67bdac2f26b44e33dc783ee9d89620856fad6c1dc9d3d029d017efb39f8e\"" Jan 29 16:25:34.674901 kubelet[2606]: E0129 16:25:34.674331 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:34.678818 containerd[1484]: time="2025-01-29T16:25:34.678761888Z" level=info msg="CreateContainer within sandbox \"c57e67bdac2f26b44e33dc783ee9d89620856fad6c1dc9d3d029d017efb39f8e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:25:34.684455 containerd[1484]: time="2025-01-29T16:25:34.684081407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ckw8x,Uid:acbf8f52-2030-4935-b4e3-dc6c908882cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\"" Jan 29 16:25:34.685994 kubelet[2606]: E0129 16:25:34.685908 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:34.688002 containerd[1484]: time="2025-01-29T16:25:34.687316948Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:25:34.694278 containerd[1484]: time="2025-01-29T16:25:34.694207279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9ddf2,Uid:c69f1ba6-8703-434f-a14e-22d47f68ec03,Namespace:kube-system,Attempt:0,} returns sandbox id \"09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643\"" Jan 29 16:25:34.695777 kubelet[2606]: E0129 16:25:34.695106 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:34.705434 containerd[1484]: time="2025-01-29T16:25:34.705388641Z" level=info msg="CreateContainer within sandbox \"c57e67bdac2f26b44e33dc783ee9d89620856fad6c1dc9d3d029d017efb39f8e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"13798df19d9ecda83d8b4a2e589c4fdfc131accac894c0579df73b98dd3e735f\"" Jan 29 16:25:34.706322 containerd[1484]: time="2025-01-29T16:25:34.706286741Z" level=info msg="StartContainer for \"13798df19d9ecda83d8b4a2e589c4fdfc131accac894c0579df73b98dd3e735f\"" Jan 29 16:25:34.739013 systemd[1]: Started cri-containerd-13798df19d9ecda83d8b4a2e589c4fdfc131accac894c0579df73b98dd3e735f.scope - libcontainer container 13798df19d9ecda83d8b4a2e589c4fdfc131accac894c0579df73b98dd3e735f. 
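The sandbox and container messages above trace the CRI sequence the kubelet drives against containerd: RunPodSandbox, then CreateContainer and StartContainer inside the returned sandbox. A bare-bones sketch of the first call over the CRI gRPC socket; the socket path and pod metadata are copied from the log, everything else is an assumption:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-tqkw4",
				Namespace: "kube-system",
				Uid:       "491ad20e-f2c0-4e66-aaaa-88984bbccca4",
			},
		},
	})
	if err != nil {
		panic(err)
	}
	// cf. the "returns sandbox id ..." entry; CreateContainer/StartContainer
	// would follow using this id.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}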
Jan 29 16:25:34.772406 containerd[1484]: time="2025-01-29T16:25:34.772274067Z" level=info msg="StartContainer for \"13798df19d9ecda83d8b4a2e589c4fdfc131accac894c0579df73b98dd3e735f\" returns successfully" Jan 29 16:25:35.716854 kubelet[2606]: E0129 16:25:35.716806 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:35.725774 kubelet[2606]: I0129 16:25:35.725714 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tqkw4" podStartSLOduration=2.725692483 podStartE2EDuration="2.725692483s" podCreationTimestamp="2025-01-29 16:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:35.725655432 +0000 UTC m=+7.123664753" watchObservedRunningTime="2025-01-29 16:25:35.725692483 +0000 UTC m=+7.123701804" Jan 29 16:25:36.718382 kubelet[2606]: E0129 16:25:36.718332 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:37.960529 kubelet[2606]: E0129 16:25:37.960497 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:38.422844 kubelet[2606]: E0129 16:25:38.422764 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:38.646995 update_engine[1466]: I20250129 16:25:38.646894 1466 update_attempter.cc:509] Updating boot flags... Jan 29 16:25:38.681124 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2990) Jan 29 16:25:38.721647 kubelet[2606]: E0129 16:25:38.721613 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:38.726930 kubelet[2606]: E0129 16:25:38.722155 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:38.740600 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2992) Jan 29 16:25:38.784898 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2992) Jan 29 16:25:39.723184 kubelet[2606]: E0129 16:25:39.723148 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:41.697149 kubelet[2606]: E0129 16:25:41.697097 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:41.726068 kubelet[2606]: E0129 16:25:41.725787 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:42.958642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085423405.mount: Deactivated successfully. 
Jan 29 16:25:45.939955 containerd[1484]: time="2025-01-29T16:25:45.939906631Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:45.940672 containerd[1484]: time="2025-01-29T16:25:45.940638314Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 16:25:45.941556 containerd[1484]: time="2025-01-29T16:25:45.941522514Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:45.942944 containerd[1484]: time="2025-01-29T16:25:45.942906610Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.255549997s" Jan 29 16:25:45.942944 containerd[1484]: time="2025-01-29T16:25:45.942938020Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 16:25:45.951382 containerd[1484]: time="2025-01-29T16:25:45.951345325Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:25:45.963907 containerd[1484]: time="2025-01-29T16:25:45.963840416Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:25:45.978528 containerd[1484]: time="2025-01-29T16:25:45.978472123Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\"" Jan 29 16:25:45.981759 containerd[1484]: time="2025-01-29T16:25:45.981715852Z" level=info msg="StartContainer for \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\"" Jan 29 16:25:46.014055 systemd[1]: Started cri-containerd-ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05.scope - libcontainer container ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05. Jan 29 16:25:46.077683 systemd[1]: cri-containerd-ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05.scope: Deactivated successfully. 
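The "Pulled image ... in 11.255549997s" entry above is a digest-pinned pull, which is why the repo tag is logged empty and only the repo digest is recorded. A sketch of an equivalent pull with the containerd Go client, assuming the default socket and the k8s.io namespace:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Digest reference copied from the log entry above.
	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	start := time.Now()
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Pulled %s in %s\n", img.Name(), time.Since(start)) // cf. "in 11.255549997s"
}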
Jan 29 16:25:46.091453 containerd[1484]: time="2025-01-29T16:25:46.091401891Z" level=info msg="StartContainer for \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\" returns successfully" Jan 29 16:25:46.508335 containerd[1484]: time="2025-01-29T16:25:46.508269046Z" level=info msg="shim disconnected" id=ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05 namespace=k8s.io Jan 29 16:25:46.508335 containerd[1484]: time="2025-01-29T16:25:46.508316756Z" level=warning msg="cleaning up after shim disconnected" id=ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05 namespace=k8s.io Jan 29 16:25:46.508335 containerd[1484]: time="2025-01-29T16:25:46.508326004Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:25:46.746805 kubelet[2606]: E0129 16:25:46.746764 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:46.748374 containerd[1484]: time="2025-01-29T16:25:46.748328079Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:25:46.764324 containerd[1484]: time="2025-01-29T16:25:46.764226496Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\"" Jan 29 16:25:46.764818 containerd[1484]: time="2025-01-29T16:25:46.764756156Z" level=info msg="StartContainer for \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\"" Jan 29 16:25:46.797981 systemd[1]: Started cri-containerd-4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469.scope - libcontainer container 4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469. Jan 29 16:25:46.824392 containerd[1484]: time="2025-01-29T16:25:46.824345101Z" level=info msg="StartContainer for \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\" returns successfully" Jan 29 16:25:46.837914 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:25:46.838152 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:25:46.838387 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:25:46.844193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:25:46.844433 systemd[1]: cri-containerd-4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469.scope: Deactivated successfully. Jan 29 16:25:46.860170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
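The apply-sysctl-overwrites step above (and the systemd-sysctl stop/start around it) amounts to writing kernel parameters under /proc/sys. A minimal sketch; the two keys below are common Cilium settings used as examples, not necessarily the exact set this version applies:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a value to /proc/sys, translating dotted keys to paths
// (net.ipv4.ip_forward -> /proc/sys/net/ipv4/ip_forward).
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	for key, val := range map[string]string{
		"net.ipv4.conf.all.rp_filter": "0", // example override
		"net.ipv4.ip_forward":         "1", // example override
	} {
		if err := setSysctl(key, val); err != nil {
			fmt.Fprintf(os.Stderr, "sysctl %s: %v\n", key, err)
		}
	}
}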
Jan 29 16:25:46.874741 containerd[1484]: time="2025-01-29T16:25:46.874656091Z" level=info msg="shim disconnected" id=4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469 namespace=k8s.io Jan 29 16:25:46.874741 containerd[1484]: time="2025-01-29T16:25:46.874732436Z" level=warning msg="cleaning up after shim disconnected" id=4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469 namespace=k8s.io Jan 29 16:25:46.874741 containerd[1484]: time="2025-01-29T16:25:46.874743586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:25:46.974878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05-rootfs.mount: Deactivated successfully. Jan 29 16:25:47.746325 kubelet[2606]: E0129 16:25:47.746290 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:47.748522 containerd[1484]: time="2025-01-29T16:25:47.748444600Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:25:47.770107 containerd[1484]: time="2025-01-29T16:25:47.770062442Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\"" Jan 29 16:25:47.770608 containerd[1484]: time="2025-01-29T16:25:47.770563006Z" level=info msg="StartContainer for \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\"" Jan 29 16:25:47.799993 systemd[1]: Started cri-containerd-037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318.scope - libcontainer container 037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318. Jan 29 16:25:47.833702 systemd[1]: cri-containerd-037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318.scope: Deactivated successfully. Jan 29 16:25:47.834141 containerd[1484]: time="2025-01-29T16:25:47.834101327Z" level=info msg="StartContainer for \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\" returns successfully" Jan 29 16:25:47.862420 containerd[1484]: time="2025-01-29T16:25:47.862363321Z" level=info msg="shim disconnected" id=037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318 namespace=k8s.io Jan 29 16:25:47.862420 containerd[1484]: time="2025-01-29T16:25:47.862416141Z" level=warning msg="cleaning up after shim disconnected" id=037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318 namespace=k8s.io Jan 29 16:25:47.862420 containerd[1484]: time="2025-01-29T16:25:47.862424747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:25:47.974708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318-rootfs.mount: Deactivated successfully. 
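mount-bpf-fs, the next init step above, just ensures the BPF filesystem is mounted at /sys/fs/bpf so map pins survive agent restarts. A sketch using a raw mount call; treating EBUSY as "already mounted" is an assumption made for brevity, and the call needs CAP_SYS_ADMIN:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Mount bpffs at the conventional pin location.
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if err == unix.EBUSY {
		fmt.Println("bpffs already mounted")
	} else if err != nil {
		panic(err)
	}
}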
Jan 29 16:25:48.750162 kubelet[2606]: E0129 16:25:48.750130 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:48.751970 containerd[1484]: time="2025-01-29T16:25:48.751903319Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:25:48.771027 containerd[1484]: time="2025-01-29T16:25:48.770982022Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\"" Jan 29 16:25:48.771592 containerd[1484]: time="2025-01-29T16:25:48.771521399Z" level=info msg="StartContainer for \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\"" Jan 29 16:25:48.799042 systemd[1]: Started cri-containerd-05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb.scope - libcontainer container 05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb. Jan 29 16:25:48.826158 systemd[1]: cri-containerd-05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb.scope: Deactivated successfully. Jan 29 16:25:48.827378 kubelet[2606]: E0129 16:25:48.827326 2606 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacbf8f52_2030_4935_b4e3_dc6c908882cc.slice/cri-containerd-05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:25:48.828689 containerd[1484]: time="2025-01-29T16:25:48.828639264Z" level=info msg="StartContainer for \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\" returns successfully" Jan 29 16:25:48.938855 containerd[1484]: time="2025-01-29T16:25:48.938785498Z" level=info msg="shim disconnected" id=05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb namespace=k8s.io Jan 29 16:25:48.938855 containerd[1484]: time="2025-01-29T16:25:48.938844359Z" level=warning msg="cleaning up after shim disconnected" id=05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb namespace=k8s.io Jan 29 16:25:48.938855 containerd[1484]: time="2025-01-29T16:25:48.938852925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:25:48.974872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb-rootfs.mount: Deactivated successfully. 
Jan 29 16:25:49.452228 containerd[1484]: time="2025-01-29T16:25:49.452146620Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:49.452897 containerd[1484]: time="2025-01-29T16:25:49.452827184Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 16:25:49.454070 containerd[1484]: time="2025-01-29T16:25:49.454038549Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:49.455383 containerd[1484]: time="2025-01-29T16:25:49.455337951Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.503950785s" Jan 29 16:25:49.455425 containerd[1484]: time="2025-01-29T16:25:49.455400438Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 16:25:49.467458 containerd[1484]: time="2025-01-29T16:25:49.467409341Z" level=info msg="CreateContainer within sandbox \"09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:25:49.488224 containerd[1484]: time="2025-01-29T16:25:49.488174632Z" level=info msg="CreateContainer within sandbox \"09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\"" Jan 29 16:25:49.488634 containerd[1484]: time="2025-01-29T16:25:49.488607207Z" level=info msg="StartContainer for \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\"" Jan 29 16:25:49.519053 systemd[1]: Started cri-containerd-b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d.scope - libcontainer container b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d. 
Jan 29 16:25:49.690423 containerd[1484]: time="2025-01-29T16:25:49.690018826Z" level=info msg="StartContainer for \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\" returns successfully" Jan 29 16:25:49.762978 kubelet[2606]: E0129 16:25:49.762838 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:49.769812 kubelet[2606]: E0129 16:25:49.769768 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:49.771605 containerd[1484]: time="2025-01-29T16:25:49.771550442Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:25:49.788079 kubelet[2606]: I0129 16:25:49.788011 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9ddf2" podStartSLOduration=1.029023243 podStartE2EDuration="15.787988399s" podCreationTimestamp="2025-01-29 16:25:34 +0000 UTC" firstStartedPulling="2025-01-29 16:25:34.6970331 +0000 UTC m=+6.095042411" lastFinishedPulling="2025-01-29 16:25:49.455998256 +0000 UTC m=+20.854007567" observedRunningTime="2025-01-29 16:25:49.787616076 +0000 UTC m=+21.185625387" watchObservedRunningTime="2025-01-29 16:25:49.787988399 +0000 UTC m=+21.185997710" Jan 29 16:25:49.797750 containerd[1484]: time="2025-01-29T16:25:49.797679199Z" level=info msg="CreateContainer within sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\"" Jan 29 16:25:49.798328 containerd[1484]: time="2025-01-29T16:25:49.798289721Z" level=info msg="StartContainer for \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\"" Jan 29 16:25:49.834141 systemd[1]: Started cri-containerd-f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e.scope - libcontainer container f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e. Jan 29 16:25:49.884473 containerd[1484]: time="2025-01-29T16:25:49.884422365Z" level=info msg="StartContainer for \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\" returns successfully" Jan 29 16:25:50.024137 kubelet[2606]: I0129 16:25:50.024111 2606 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 16:25:50.061556 systemd[1]: Created slice kubepods-burstable-podc440a254_1621_40ab_b7e4_bcb65e973576.slice - libcontainer container kubepods-burstable-podc440a254_1621_40ab_b7e4_bcb65e973576.slice. Jan 29 16:25:50.067770 systemd[1]: Created slice kubepods-burstable-podd641a4d6_8507_4731_ac2a_f1dd2d41298a.slice - libcontainer container kubepods-burstable-podd641a4d6_8507_4731_ac2a_f1dd2d41298a.slice. 
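The startup-latency entry for cilium-operator above is internally consistent: podStartSLOduration appears to be the end-to-end duration minus the image-pull window, and the numbers check out. A quick verification sketch (fractional seconds padded to nine digits for parsing; E2E is taken as logged because podCreationTimestamp is truncated to whole seconds):

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the tracker entry above.
	firstPull := mustParse("2025-01-29 16:25:34.697033100 +0000 UTC")
	lastPull := mustParse("2025-01-29 16:25:49.455998256 +0000 UTC")
	e2e := 15787988399 * time.Nanosecond // podStartE2EDuration as logged

	pull := lastPull.Sub(firstPull)               // image-pull window: 14.758965156s
	fmt.Println("podStartSLOduration:", e2e-pull) // 1.029023243s, matching the log
}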
Jan 29 16:25:50.123674 kubelet[2606]: I0129 16:25:50.123614 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d641a4d6-8507-4731-ac2a-f1dd2d41298a-config-volume\") pod \"coredns-668d6bf9bc-d5lxx\" (UID: \"d641a4d6-8507-4731-ac2a-f1dd2d41298a\") " pod="kube-system/coredns-668d6bf9bc-d5lxx" Jan 29 16:25:50.123674 kubelet[2606]: I0129 16:25:50.123672 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c440a254-1621-40ab-b7e4-bcb65e973576-config-volume\") pod \"coredns-668d6bf9bc-cpzdj\" (UID: \"c440a254-1621-40ab-b7e4-bcb65e973576\") " pod="kube-system/coredns-668d6bf9bc-cpzdj" Jan 29 16:25:50.123819 kubelet[2606]: I0129 16:25:50.123691 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjj29\" (UniqueName: \"kubernetes.io/projected/d641a4d6-8507-4731-ac2a-f1dd2d41298a-kube-api-access-cjj29\") pod \"coredns-668d6bf9bc-d5lxx\" (UID: \"d641a4d6-8507-4731-ac2a-f1dd2d41298a\") " pod="kube-system/coredns-668d6bf9bc-d5lxx" Jan 29 16:25:50.123819 kubelet[2606]: I0129 16:25:50.123708 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdlq4\" (UniqueName: \"kubernetes.io/projected/c440a254-1621-40ab-b7e4-bcb65e973576-kube-api-access-pdlq4\") pod \"coredns-668d6bf9bc-cpzdj\" (UID: \"c440a254-1621-40ab-b7e4-bcb65e973576\") " pod="kube-system/coredns-668d6bf9bc-cpzdj" Jan 29 16:25:50.365940 kubelet[2606]: E0129 16:25:50.365783 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:50.366835 containerd[1484]: time="2025-01-29T16:25:50.366793996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cpzdj,Uid:c440a254-1621-40ab-b7e4-bcb65e973576,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:50.370596 kubelet[2606]: E0129 16:25:50.370460 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:50.370983 containerd[1484]: time="2025-01-29T16:25:50.370952797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d5lxx,Uid:d641a4d6-8507-4731-ac2a-f1dd2d41298a,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:50.774034 kubelet[2606]: E0129 16:25:50.773993 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:50.774444 kubelet[2606]: E0129 16:25:50.774103 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:50.788306 kubelet[2606]: I0129 16:25:50.788240 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ckw8x" podStartSLOduration=6.523892055 podStartE2EDuration="17.788218583s" podCreationTimestamp="2025-01-29 16:25:33 +0000 UTC" firstStartedPulling="2025-01-29 16:25:34.686810604 +0000 UTC m=+6.084819915" lastFinishedPulling="2025-01-29 16:25:45.951137132 +0000 UTC m=+17.349146443" observedRunningTime="2025-01-29 16:25:50.787758835 +0000 UTC 
m=+22.185768176" watchObservedRunningTime="2025-01-29 16:25:50.788218583 +0000 UTC m=+22.186227904" Jan 29 16:25:51.776537 kubelet[2606]: E0129 16:25:51.776471 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:52.147739 systemd-networkd[1423]: cilium_host: Link UP Jan 29 16:25:52.147920 systemd-networkd[1423]: cilium_net: Link UP Jan 29 16:25:52.148099 systemd-networkd[1423]: cilium_net: Gained carrier Jan 29 16:25:52.148290 systemd-networkd[1423]: cilium_host: Gained carrier Jan 29 16:25:52.253500 systemd-networkd[1423]: cilium_vxlan: Link UP Jan 29 16:25:52.253513 systemd-networkd[1423]: cilium_vxlan: Gained carrier Jan 29 16:25:52.465904 kernel: NET: Registered PF_ALG protocol family Jan 29 16:25:52.557045 systemd-networkd[1423]: cilium_net: Gained IPv6LL Jan 29 16:25:52.777711 kubelet[2606]: E0129 16:25:52.777676 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:53.038025 systemd-networkd[1423]: cilium_host: Gained IPv6LL Jan 29 16:25:53.143071 systemd-networkd[1423]: lxc_health: Link UP Jan 29 16:25:53.143361 systemd-networkd[1423]: lxc_health: Gained carrier Jan 29 16:25:53.470920 systemd-networkd[1423]: lxc794cb1563539: Link UP Jan 29 16:25:53.479889 kernel: eth0: renamed from tmp6d698 Jan 29 16:25:53.508990 kernel: eth0: renamed from tmp9ed4b Jan 29 16:25:53.516443 systemd-networkd[1423]: lxc5e27a0768f2b: Link UP Jan 29 16:25:53.516776 systemd-networkd[1423]: lxc794cb1563539: Gained carrier Jan 29 16:25:53.523225 systemd-networkd[1423]: lxc5e27a0768f2b: Gained carrier Jan 29 16:25:53.741003 systemd-networkd[1423]: cilium_vxlan: Gained IPv6LL Jan 29 16:25:54.189039 systemd-networkd[1423]: lxc_health: Gained IPv6LL Jan 29 16:25:54.516006 kubelet[2606]: E0129 16:25:54.515884 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:54.701012 systemd-networkd[1423]: lxc794cb1563539: Gained IPv6LL Jan 29 16:25:54.781388 kubelet[2606]: E0129 16:25:54.781355 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:55.022028 systemd-networkd[1423]: lxc5e27a0768f2b: Gained IPv6LL Jan 29 16:25:55.783197 kubelet[2606]: E0129 16:25:55.783168 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:57.008097 containerd[1484]: time="2025-01-29T16:25:57.007958341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:57.008097 containerd[1484]: time="2025-01-29T16:25:57.008038341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:57.008097 containerd[1484]: time="2025-01-29T16:25:57.008053049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:57.008907 containerd[1484]: time="2025-01-29T16:25:57.008264728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:57.009302 containerd[1484]: time="2025-01-29T16:25:57.009210918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:57.009302 containerd[1484]: time="2025-01-29T16:25:57.009277393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:57.009386 containerd[1484]: time="2025-01-29T16:25:57.009290758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:57.009445 containerd[1484]: time="2025-01-29T16:25:57.009402348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:57.034993 systemd[1]: Started cri-containerd-6d6989c0a827d5f2d1639abdb638bf428f5de1850f84c115c9de30ae754b51e7.scope - libcontainer container 6d6989c0a827d5f2d1639abdb638bf428f5de1850f84c115c9de30ae754b51e7. Jan 29 16:25:57.038105 systemd[1]: Started cri-containerd-9ed4b7cb054572afafdf29024cc36730c4ae0318c10e674a1db5f7b66d9d8941.scope - libcontainer container 9ed4b7cb054572afafdf29024cc36730c4ae0318c10e674a1db5f7b66d9d8941. Jan 29 16:25:57.048938 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:25:57.051087 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:25:57.080416 containerd[1484]: time="2025-01-29T16:25:57.080374291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cpzdj,Uid:c440a254-1621-40ab-b7e4-bcb65e973576,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ed4b7cb054572afafdf29024cc36730c4ae0318c10e674a1db5f7b66d9d8941\"" Jan 29 16:25:57.081358 kubelet[2606]: E0129 16:25:57.081330 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:57.083921 containerd[1484]: time="2025-01-29T16:25:57.083854670Z" level=info msg="CreateContainer within sandbox \"9ed4b7cb054572afafdf29024cc36730c4ae0318c10e674a1db5f7b66d9d8941\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:25:57.089151 containerd[1484]: time="2025-01-29T16:25:57.089096035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d5lxx,Uid:d641a4d6-8507-4731-ac2a-f1dd2d41298a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d6989c0a827d5f2d1639abdb638bf428f5de1850f84c115c9de30ae754b51e7\"" Jan 29 16:25:57.089674 kubelet[2606]: E0129 16:25:57.089644 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:57.092214 containerd[1484]: time="2025-01-29T16:25:57.092181591Z" level=info msg="CreateContainer within sandbox \"6d6989c0a827d5f2d1639abdb638bf428f5de1850f84c115c9de30ae754b51e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:25:57.109579 containerd[1484]: time="2025-01-29T16:25:57.109541451Z" level=info 
msg="CreateContainer within sandbox \"6d6989c0a827d5f2d1639abdb638bf428f5de1850f84c115c9de30ae754b51e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4dda2375626c8152b49644d6be5cc0ec813d3014ac44f766bed3a803ab90830\"" Jan 29 16:25:57.110254 containerd[1484]: time="2025-01-29T16:25:57.110223684Z" level=info msg="StartContainer for \"d4dda2375626c8152b49644d6be5cc0ec813d3014ac44f766bed3a803ab90830\"" Jan 29 16:25:57.121633 containerd[1484]: time="2025-01-29T16:25:57.121582932Z" level=info msg="CreateContainer within sandbox \"9ed4b7cb054572afafdf29024cc36730c4ae0318c10e674a1db5f7b66d9d8941\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fabbbea44b03cd3468067a0e0449b0b390c8f72c93a339f4494c454d39f0c315\"" Jan 29 16:25:57.122163 containerd[1484]: time="2025-01-29T16:25:57.122128317Z" level=info msg="StartContainer for \"fabbbea44b03cd3468067a0e0449b0b390c8f72c93a339f4494c454d39f0c315\"" Jan 29 16:25:57.142041 systemd[1]: Started cri-containerd-d4dda2375626c8152b49644d6be5cc0ec813d3014ac44f766bed3a803ab90830.scope - libcontainer container d4dda2375626c8152b49644d6be5cc0ec813d3014ac44f766bed3a803ab90830. Jan 29 16:25:57.168029 systemd[1]: Started cri-containerd-fabbbea44b03cd3468067a0e0449b0b390c8f72c93a339f4494c454d39f0c315.scope - libcontainer container fabbbea44b03cd3468067a0e0449b0b390c8f72c93a339f4494c454d39f0c315. Jan 29 16:25:57.180993 containerd[1484]: time="2025-01-29T16:25:57.180921601Z" level=info msg="StartContainer for \"d4dda2375626c8152b49644d6be5cc0ec813d3014ac44f766bed3a803ab90830\" returns successfully" Jan 29 16:25:57.204757 containerd[1484]: time="2025-01-29T16:25:57.204698998Z" level=info msg="StartContainer for \"fabbbea44b03cd3468067a0e0449b0b390c8f72c93a339f4494c454d39f0c315\" returns successfully" Jan 29 16:25:57.793983 kubelet[2606]: E0129 16:25:57.793568 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:57.796412 kubelet[2606]: E0129 16:25:57.796361 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:57.810698 kubelet[2606]: I0129 16:25:57.810335 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d5lxx" podStartSLOduration=23.810274042 podStartE2EDuration="23.810274042s" podCreationTimestamp="2025-01-29 16:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:57.809744967 +0000 UTC m=+29.207754278" watchObservedRunningTime="2025-01-29 16:25:57.810274042 +0000 UTC m=+29.208283353" Jan 29 16:25:57.832950 kubelet[2606]: I0129 16:25:57.832442 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cpzdj" podStartSLOduration=23.832412435 podStartE2EDuration="23.832412435s" podCreationTimestamp="2025-01-29 16:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:25:57.821237885 +0000 UTC m=+29.219247206" watchObservedRunningTime="2025-01-29 16:25:57.832412435 +0000 UTC m=+29.230421746" Jan 29 16:25:58.013544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701736959.mount: Deactivated successfully. 
Jan 29 16:25:58.079790 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:55856.service - OpenSSH per-connection server daemon (10.0.0.1:55856). Jan 29 16:25:58.122786 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 55856 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:58.124428 sshd-session[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:58.128586 systemd-logind[1465]: New session 10 of user core. Jan 29 16:25:58.134976 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:25:58.413385 sshd[4016]: Connection closed by 10.0.0.1 port 55856 Jan 29 16:25:58.413657 sshd-session[4014]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:58.417158 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:55856.service: Deactivated successfully. Jan 29 16:25:58.418959 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:25:58.419569 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:25:58.420463 systemd-logind[1465]: Removed session 10. Jan 29 16:25:58.797924 kubelet[2606]: E0129 16:25:58.797825 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:58.797924 kubelet[2606]: E0129 16:25:58.797852 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:59.799444 kubelet[2606]: E0129 16:25:59.799416 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:25:59.799855 kubelet[2606]: E0129 16:25:59.799567 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:03.425803 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:55868.service - OpenSSH per-connection server daemon (10.0.0.1:55868). Jan 29 16:26:03.468341 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 55868 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:03.469993 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:03.474348 systemd-logind[1465]: New session 11 of user core. Jan 29 16:26:03.479993 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:26:03.597721 sshd[4036]: Connection closed by 10.0.0.1 port 55868 Jan 29 16:26:03.598113 sshd-session[4034]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:03.602192 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:55868.service: Deactivated successfully. Jan 29 16:26:03.604242 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:26:03.605101 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:26:03.606031 systemd-logind[1465]: Removed session 11. Jan 29 16:26:08.610277 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:36220.service - OpenSSH per-connection server daemon (10.0.0.1:36220). 
Jan 29 16:26:08.655067 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 36220 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:08.657089 sshd-session[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:08.679130 systemd-logind[1465]: New session 12 of user core. Jan 29 16:26:08.688135 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 16:26:08.797682 sshd[4055]: Connection closed by 10.0.0.1 port 36220 Jan 29 16:26:08.798115 sshd-session[4053]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:08.801791 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:36220.service: Deactivated successfully. Jan 29 16:26:08.804006 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:26:08.804733 systemd-logind[1465]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:26:08.805665 systemd-logind[1465]: Removed session 12. Jan 29 16:26:13.812108 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:36226.service - OpenSSH per-connection server daemon (10.0.0.1:36226). Jan 29 16:26:13.850326 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 36226 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:13.852346 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:13.857131 systemd-logind[1465]: New session 13 of user core. Jan 29 16:26:13.866003 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:26:13.971905 sshd[4071]: Connection closed by 10.0.0.1 port 36226 Jan 29 16:26:13.972367 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:13.976221 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:36226.service: Deactivated successfully. Jan 29 16:26:13.978988 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:26:13.979800 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:26:13.980609 systemd-logind[1465]: Removed session 13. Jan 29 16:26:18.983612 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:57082.service - OpenSSH per-connection server daemon (10.0.0.1:57082). Jan 29 16:26:19.021788 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 57082 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:19.023664 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:19.028508 systemd-logind[1465]: New session 14 of user core. Jan 29 16:26:19.040156 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:26:19.152979 sshd[4088]: Connection closed by 10.0.0.1 port 57082 Jan 29 16:26:19.153432 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:19.158219 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:57082.service: Deactivated successfully. Jan 29 16:26:19.160847 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:26:19.161828 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:26:19.162999 systemd-logind[1465]: Removed session 14. Jan 29 16:26:24.185376 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:57090.service - OpenSSH per-connection server daemon (10.0.0.1:57090). 
Jan 29 16:26:24.220536 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 57090 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:24.222348 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:24.227069 systemd-logind[1465]: New session 15 of user core. Jan 29 16:26:24.241023 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:26:24.349660 sshd[4105]: Connection closed by 10.0.0.1 port 57090 Jan 29 16:26:24.350027 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:24.363778 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:57090.service: Deactivated successfully. Jan 29 16:26:24.365878 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:26:24.367501 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:26:24.373168 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:57094.service - OpenSSH per-connection server daemon (10.0.0.1:57094). Jan 29 16:26:24.374045 systemd-logind[1465]: Removed session 15. Jan 29 16:26:24.410568 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 57094 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:24.412324 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:24.417261 systemd-logind[1465]: New session 16 of user core. Jan 29 16:26:24.429038 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:26:24.597539 sshd[4122]: Connection closed by 10.0.0.1 port 57094 Jan 29 16:26:24.598333 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:24.610341 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:57094.service: Deactivated successfully. Jan 29 16:26:24.615528 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:26:24.617081 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:26:24.628975 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:57104.service - OpenSSH per-connection server daemon (10.0.0.1:57104). Jan 29 16:26:24.630793 systemd-logind[1465]: Removed session 16. Jan 29 16:26:24.671422 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 57104 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:24.673225 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:24.677934 systemd-logind[1465]: New session 17 of user core. Jan 29 16:26:24.685008 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:26:24.798083 sshd[4136]: Connection closed by 10.0.0.1 port 57104 Jan 29 16:26:24.798573 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:24.803544 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:57104.service: Deactivated successfully. Jan 29 16:26:24.805786 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:26:24.806608 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:26:24.807578 systemd-logind[1465]: Removed session 17. Jan 29 16:26:29.811945 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:57024.service - OpenSSH per-connection server daemon (10.0.0.1:57024). 
Jan 29 16:26:29.851703 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 57024 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:29.853217 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:29.857151 systemd-logind[1465]: New session 18 of user core. Jan 29 16:26:29.870986 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:26:29.973712 sshd[4153]: Connection closed by 10.0.0.1 port 57024 Jan 29 16:26:29.974061 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:29.977603 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:57024.service: Deactivated successfully. Jan 29 16:26:29.979764 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:26:29.980512 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:26:29.981390 systemd-logind[1465]: Removed session 18. Jan 29 16:26:34.987021 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:57030.service - OpenSSH per-connection server daemon (10.0.0.1:57030). Jan 29 16:26:35.027801 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 57030 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:35.029403 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:35.033477 systemd-logind[1465]: New session 19 of user core. Jan 29 16:26:35.043003 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:26:35.153315 sshd[4170]: Connection closed by 10.0.0.1 port 57030 Jan 29 16:26:35.153807 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:35.165193 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:57030.service: Deactivated successfully. Jan 29 16:26:35.167296 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:26:35.169582 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:26:35.177252 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:57046.service - OpenSSH per-connection server daemon (10.0.0.1:57046). Jan 29 16:26:35.178512 systemd-logind[1465]: Removed session 19. Jan 29 16:26:35.211551 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 57046 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:35.213310 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:35.218222 systemd-logind[1465]: New session 20 of user core. Jan 29 16:26:35.236143 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:26:35.485501 sshd[4185]: Connection closed by 10.0.0.1 port 57046 Jan 29 16:26:35.485849 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:35.496715 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:57046.service: Deactivated successfully. Jan 29 16:26:35.498823 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:26:35.500263 systemd-logind[1465]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:26:35.510507 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:57062.service - OpenSSH per-connection server daemon (10.0.0.1:57062). Jan 29 16:26:35.511799 systemd-logind[1465]: Removed session 20. 
Jan 29 16:26:35.548411 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 57062 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:35.549951 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:35.554443 systemd-logind[1465]: New session 21 of user core. Jan 29 16:26:35.571117 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 16:26:36.626606 sshd[4199]: Connection closed by 10.0.0.1 port 57062 Jan 29 16:26:36.627187 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:36.639509 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:57062.service: Deactivated successfully. Jan 29 16:26:36.641644 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:26:36.644084 systemd-logind[1465]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:26:36.652534 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:57070.service - OpenSSH per-connection server daemon (10.0.0.1:57070). Jan 29 16:26:36.655028 systemd-logind[1465]: Removed session 21. Jan 29 16:26:36.685832 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 57070 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:36.687320 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:36.691678 systemd-logind[1465]: New session 22 of user core. Jan 29 16:26:36.700020 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:26:36.956938 sshd[4220]: Connection closed by 10.0.0.1 port 57070 Jan 29 16:26:36.957680 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:36.968184 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:57070.service: Deactivated successfully. Jan 29 16:26:36.970367 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:26:36.972130 systemd-logind[1465]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:26:36.973491 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:57084.service - OpenSSH per-connection server daemon (10.0.0.1:57084). Jan 29 16:26:36.975152 systemd-logind[1465]: Removed session 22. Jan 29 16:26:37.011492 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 57084 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:37.012748 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:37.016809 systemd-logind[1465]: New session 23 of user core. Jan 29 16:26:37.028071 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:26:37.129130 sshd[4233]: Connection closed by 10.0.0.1 port 57084 Jan 29 16:26:37.129442 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:37.133078 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:57084.service: Deactivated successfully. Jan 29 16:26:37.134902 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:26:37.135532 systemd-logind[1465]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:26:37.136358 systemd-logind[1465]: Removed session 23. Jan 29 16:26:40.691734 kubelet[2606]: E0129 16:26:40.691685 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:42.143918 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:50544.service - OpenSSH per-connection server daemon (10.0.0.1:50544). 
Jan 29 16:26:42.181496 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 50544 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:42.182854 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:42.186928 systemd-logind[1465]: New session 24 of user core. Jan 29 16:26:42.196994 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 16:26:42.321361 sshd[4248]: Connection closed by 10.0.0.1 port 50544 Jan 29 16:26:42.321703 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:42.325602 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:50544.service: Deactivated successfully. Jan 29 16:26:42.327890 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:26:42.328519 systemd-logind[1465]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:26:42.329324 systemd-logind[1465]: Removed session 24. Jan 29 16:26:47.334944 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:43522.service - OpenSSH per-connection server daemon (10.0.0.1:43522). Jan 29 16:26:47.377626 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 43522 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:47.379605 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:47.384850 systemd-logind[1465]: New session 25 of user core. Jan 29 16:26:47.392018 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 16:26:47.505267 sshd[4265]: Connection closed by 10.0.0.1 port 43522 Jan 29 16:26:47.505691 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:47.509730 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:43522.service: Deactivated successfully. Jan 29 16:26:47.512164 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 16:26:47.513142 systemd-logind[1465]: Session 25 logged out. Waiting for processes to exit. Jan 29 16:26:47.514121 systemd-logind[1465]: Removed session 25. Jan 29 16:26:47.692244 kubelet[2606]: E0129 16:26:47.692118 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:49.692142 kubelet[2606]: E0129 16:26:49.692074 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:52.518069 systemd[1]: Started sshd@25-10.0.0.140:22-10.0.0.1:43536.service - OpenSSH per-connection server daemon (10.0.0.1:43536). Jan 29 16:26:52.556508 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 43536 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:52.557940 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:52.562085 systemd-logind[1465]: New session 26 of user core. Jan 29 16:26:52.570997 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 16:26:52.699790 sshd[4280]: Connection closed by 10.0.0.1 port 43536 Jan 29 16:26:52.700644 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:52.704246 systemd[1]: sshd@25-10.0.0.140:22-10.0.0.1:43536.service: Deactivated successfully. Jan 29 16:26:52.706347 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 16:26:52.707011 systemd-logind[1465]: Session 26 logged out. Waiting for processes to exit. 
Jan 29 16:26:52.707751 systemd-logind[1465]: Removed session 26. Jan 29 16:26:53.692170 kubelet[2606]: E0129 16:26:53.692127 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:57.717020 systemd[1]: Started sshd@26-10.0.0.140:22-10.0.0.1:54790.service - OpenSSH per-connection server daemon (10.0.0.1:54790). Jan 29 16:26:57.760883 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 54790 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:57.762776 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:57.768010 systemd-logind[1465]: New session 27 of user core. Jan 29 16:26:57.777081 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 16:26:57.897507 sshd[4296]: Connection closed by 10.0.0.1 port 54790 Jan 29 16:26:57.897963 sshd-session[4294]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:57.913770 systemd[1]: sshd@26-10.0.0.140:22-10.0.0.1:54790.service: Deactivated successfully. Jan 29 16:26:57.915638 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 16:26:57.917547 systemd-logind[1465]: Session 27 logged out. Waiting for processes to exit. Jan 29 16:26:57.924169 systemd[1]: Started sshd@27-10.0.0.140:22-10.0.0.1:54792.service - OpenSSH per-connection server daemon (10.0.0.1:54792). Jan 29 16:26:57.925487 systemd-logind[1465]: Removed session 27. Jan 29 16:26:57.959891 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 54792 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:57.961477 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:57.965890 systemd-logind[1465]: New session 28 of user core. Jan 29 16:26:57.987041 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 29 16:26:59.607848 containerd[1484]: time="2025-01-29T16:26:59.607764263Z" level=info msg="StopContainer for \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\" with timeout 30 (s)" Jan 29 16:26:59.609106 containerd[1484]: time="2025-01-29T16:26:59.609023460Z" level=info msg="Stop container \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\" with signal terminated" Jan 29 16:26:59.621784 systemd[1]: cri-containerd-b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d.scope: Deactivated successfully. Jan 29 16:26:59.638566 containerd[1484]: time="2025-01-29T16:26:59.638518852Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:26:59.642308 containerd[1484]: time="2025-01-29T16:26:59.642143114Z" level=info msg="StopContainer for \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\" with timeout 2 (s)" Jan 29 16:26:59.642557 containerd[1484]: time="2025-01-29T16:26:59.642527415Z" level=info msg="Stop container \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\" with signal terminated" Jan 29 16:26:59.646273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d-rootfs.mount: Deactivated successfully. 
Jan 29 16:26:59.649941 systemd-networkd[1423]: lxc_health: Link DOWN Jan 29 16:26:59.649953 systemd-networkd[1423]: lxc_health: Lost carrier Jan 29 16:26:59.654563 containerd[1484]: time="2025-01-29T16:26:59.654498985Z" level=info msg="shim disconnected" id=b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d namespace=k8s.io Jan 29 16:26:59.654563 containerd[1484]: time="2025-01-29T16:26:59.654562466Z" level=warning msg="cleaning up after shim disconnected" id=b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d namespace=k8s.io Jan 29 16:26:59.654563 containerd[1484]: time="2025-01-29T16:26:59.654573386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:59.673295 containerd[1484]: time="2025-01-29T16:26:59.673229144Z" level=info msg="StopContainer for \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\" returns successfully" Jan 29 16:26:59.675254 systemd[1]: cri-containerd-f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e.scope: Deactivated successfully. Jan 29 16:26:59.675672 systemd[1]: cri-containerd-f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e.scope: Consumed 6.978s CPU time, 121.1M memory peak, 244K read from disk, 13.3M written to disk. Jan 29 16:26:59.679464 containerd[1484]: time="2025-01-29T16:26:59.679406308Z" level=info msg="StopPodSandbox for \"09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643\"" Jan 29 16:26:59.691479 kubelet[2606]: E0129 16:26:59.691450 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:59.693390 containerd[1484]: time="2025-01-29T16:26:59.679465390Z" level=info msg="Container to stop \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:59.695893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e-rootfs.mount: Deactivated successfully. Jan 29 16:26:59.699777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643-shm.mount: Deactivated successfully. Jan 29 16:26:59.701304 systemd[1]: cri-containerd-09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643.scope: Deactivated successfully. Jan 29 16:26:59.717630 containerd[1484]: time="2025-01-29T16:26:59.717555540Z" level=info msg="shim disconnected" id=f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e namespace=k8s.io Jan 29 16:26:59.717971 containerd[1484]: time="2025-01-29T16:26:59.717881470Z" level=warning msg="cleaning up after shim disconnected" id=f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e namespace=k8s.io Jan 29 16:26:59.717971 containerd[1484]: time="2025-01-29T16:26:59.717896289Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:59.726528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643-rootfs.mount: Deactivated successfully. 
Jan 29 16:26:59.730065 containerd[1484]: time="2025-01-29T16:26:59.730008224Z" level=info msg="shim disconnected" id=09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643 namespace=k8s.io Jan 29 16:26:59.730065 containerd[1484]: time="2025-01-29T16:26:59.730065674Z" level=warning msg="cleaning up after shim disconnected" id=09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643 namespace=k8s.io Jan 29 16:26:59.730207 containerd[1484]: time="2025-01-29T16:26:59.730074751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:59.739309 containerd[1484]: time="2025-01-29T16:26:59.739266176Z" level=info msg="StopContainer for \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\" returns successfully" Jan 29 16:26:59.739925 containerd[1484]: time="2025-01-29T16:26:59.739898580Z" level=info msg="StopPodSandbox for \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\"" Jan 29 16:26:59.740256 containerd[1484]: time="2025-01-29T16:26:59.739989784Z" level=info msg="Container to stop \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:59.740256 containerd[1484]: time="2025-01-29T16:26:59.740026203Z" level=info msg="Container to stop \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:59.740256 containerd[1484]: time="2025-01-29T16:26:59.740035500Z" level=info msg="Container to stop \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:59.740256 containerd[1484]: time="2025-01-29T16:26:59.740044537Z" level=info msg="Container to stop \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:59.740256 containerd[1484]: time="2025-01-29T16:26:59.740054206Z" level=info msg="Container to stop \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:59.746688 containerd[1484]: time="2025-01-29T16:26:59.746632143Z" level=info msg="TearDown network for sandbox \"09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643\" successfully" Jan 29 16:26:59.746688 containerd[1484]: time="2025-01-29T16:26:59.746681677Z" level=info msg="StopPodSandbox for \"09e9d513cd2177d2627c2eca8c913f0ccf0a933c8bb67f116bb356da32a03643\" returns successfully" Jan 29 16:26:59.755810 systemd[1]: cri-containerd-7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db.scope: Deactivated successfully. 
Jan 29 16:26:59.795563 containerd[1484]: time="2025-01-29T16:26:59.795461835Z" level=info msg="shim disconnected" id=7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db namespace=k8s.io Jan 29 16:26:59.795563 containerd[1484]: time="2025-01-29T16:26:59.795529113Z" level=warning msg="cleaning up after shim disconnected" id=7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db namespace=k8s.io Jan 29 16:26:59.795563 containerd[1484]: time="2025-01-29T16:26:59.795539883Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:59.810291 containerd[1484]: time="2025-01-29T16:26:59.810238936Z" level=info msg="TearDown network for sandbox \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" successfully" Jan 29 16:26:59.810291 containerd[1484]: time="2025-01-29T16:26:59.810285655Z" level=info msg="StopPodSandbox for \"7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db\" returns successfully" Jan 29 16:26:59.845354 kubelet[2606]: I0129 16:26:59.845041 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-bpf-maps\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.845354 kubelet[2606]: I0129 16:26:59.845118 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-host-proc-sys-net\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.845354 kubelet[2606]: I0129 16:26:59.845143 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-run\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.845354 kubelet[2606]: I0129 16:26:59.845175 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6jxw\" (UniqueName: \"kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-kube-api-access-z6jxw\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.845354 kubelet[2606]: I0129 16:26:59.845164 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.845354 kubelet[2606]: I0129 16:26:59.845201 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acbf8f52-2030-4935-b4e3-dc6c908882cc-clustermesh-secrets\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.845741 kubelet[2606]: I0129 16:26:59.845222 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-xtables-lock\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.845741 kubelet[2606]: I0129 16:26:59.845233 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.845741 kubelet[2606]: I0129 16:26:59.845246 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-etc-cni-netd\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.845741 kubelet[2606]: I0129 16:26:59.845253 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.845741 kubelet[2606]: I0129 16:26:59.845270 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cni-path\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.847957 kubelet[2606]: I0129 16:26:59.845296 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvm6v\" (UniqueName: \"kubernetes.io/projected/c69f1ba6-8703-434f-a14e-22d47f68ec03-kube-api-access-jvm6v\") pod \"c69f1ba6-8703-434f-a14e-22d47f68ec03\" (UID: \"c69f1ba6-8703-434f-a14e-22d47f68ec03\") " Jan 29 16:26:59.847957 kubelet[2606]: I0129 16:26:59.845319 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-hubble-tls\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.847957 kubelet[2606]: I0129 16:26:59.845339 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-cgroup\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.847957 kubelet[2606]: I0129 16:26:59.845361 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-host-proc-sys-kernel\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.847957 kubelet[2606]: I0129 16:26:59.845385 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-config-path\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.847957 kubelet[2606]: I0129 16:26:59.845408 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-hostproc\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.848119 kubelet[2606]: I0129 16:26:59.845431 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c69f1ba6-8703-434f-a14e-22d47f68ec03-cilium-config-path\") pod \"c69f1ba6-8703-434f-a14e-22d47f68ec03\" (UID: \"c69f1ba6-8703-434f-a14e-22d47f68ec03\") " Jan 29 16:26:59.848119 kubelet[2606]: I0129 16:26:59.845452 2606 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-lib-modules\") pod \"acbf8f52-2030-4935-b4e3-dc6c908882cc\" (UID: \"acbf8f52-2030-4935-b4e3-dc6c908882cc\") " Jan 29 16:26:59.848119 kubelet[2606]: I0129 16:26:59.845497 2606 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.848119 kubelet[2606]: I0129 16:26:59.845515 2606 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.848119 kubelet[2606]: I0129 16:26:59.845528 2606 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.848119 kubelet[2606]: I0129 16:26:59.845581 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.848262 kubelet[2606]: I0129 16:26:59.845616 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.848262 kubelet[2606]: I0129 16:26:59.845637 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.848262 kubelet[2606]: I0129 16:26:59.845656 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cni-path" (OuterVolumeSpecName: "cni-path") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.848262 kubelet[2606]: I0129 16:26:59.845916 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.861999 kubelet[2606]: I0129 16:26:59.857739 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:26:59.861999 kubelet[2606]: I0129 16:26:59.857809 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.861999 kubelet[2606]: I0129 16:26:59.857828 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-hostproc" (OuterVolumeSpecName: "hostproc") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:26:59.862134 kubelet[2606]: I0129 16:26:59.862069 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c69f1ba6-8703-434f-a14e-22d47f68ec03-kube-api-access-jvm6v" (OuterVolumeSpecName: "kube-api-access-jvm6v") pod "c69f1ba6-8703-434f-a14e-22d47f68ec03" (UID: "c69f1ba6-8703-434f-a14e-22d47f68ec03"). InnerVolumeSpecName "kube-api-access-jvm6v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:26:59.864023 kubelet[2606]: I0129 16:26:59.864001 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-kube-api-access-z6jxw" (OuterVolumeSpecName: "kube-api-access-z6jxw") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "kube-api-access-z6jxw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:26:59.865925 kubelet[2606]: I0129 16:26:59.865363 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 16:26:59.866450 kubelet[2606]: I0129 16:26:59.866400 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acbf8f52-2030-4935-b4e3-dc6c908882cc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "acbf8f52-2030-4935-b4e3-dc6c908882cc" (UID: "acbf8f52-2030-4935-b4e3-dc6c908882cc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 29 16:26:59.871181 kubelet[2606]: I0129 16:26:59.871120 2606 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c69f1ba6-8703-434f-a14e-22d47f68ec03-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c69f1ba6-8703-434f-a14e-22d47f68ec03" (UID: "c69f1ba6-8703-434f-a14e-22d47f68ec03"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 16:26:59.945683 kubelet[2606]: I0129 16:26:59.945644 2606 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6jxw\" (UniqueName: \"kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-kube-api-access-z6jxw\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.945683 kubelet[2606]: I0129 16:26:59.945671 2606 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acbf8f52-2030-4935-b4e3-dc6c908882cc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.945683 kubelet[2606]: I0129 16:26:59.945680 2606 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.945683 kubelet[2606]: I0129 16:26:59.945689 2606 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.945683 kubelet[2606]: I0129 16:26:59.945697 2606 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.945683 kubelet[2606]: I0129 16:26:59.945714 2606 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jvm6v\" (UniqueName: \"kubernetes.io/projected/c69f1ba6-8703-434f-a14e-22d47f68ec03-kube-api-access-jvm6v\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.945683 kubelet[2606]: I0129 16:26:59.945722 2606 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acbf8f52-2030-4935-b4e3-dc6c908882cc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.946155 kubelet[2606]: I0129 16:26:59.945731 2606 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.946155 kubelet[2606]: I0129 16:26:59.945740 2606 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.946155 kubelet[2606]: I0129 16:26:59.945748 2606 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acbf8f52-2030-4935-b4e3-dc6c908882cc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.946155 kubelet[2606]: I0129 16:26:59.945756 2606 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.946155 kubelet[2606]: I0129 16:26:59.945763 2606 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acbf8f52-2030-4935-b4e3-dc6c908882cc-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.946155 kubelet[2606]: I0129 16:26:59.945772 2606 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c69f1ba6-8703-434f-a14e-22d47f68ec03-cilium-config-path\") 
on node \"localhost\" DevicePath \"\"" Jan 29 16:26:59.946807 kubelet[2606]: I0129 16:26:59.946739 2606 scope.go:117] "RemoveContainer" containerID="f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e" Jan 29 16:26:59.952899 systemd[1]: Removed slice kubepods-burstable-podacbf8f52_2030_4935_b4e3_dc6c908882cc.slice - libcontainer container kubepods-burstable-podacbf8f52_2030_4935_b4e3_dc6c908882cc.slice. Jan 29 16:26:59.952993 systemd[1]: kubepods-burstable-podacbf8f52_2030_4935_b4e3_dc6c908882cc.slice: Consumed 7.084s CPU time, 121.4M memory peak, 264K read from disk, 13.3M written to disk. Jan 29 16:26:59.958129 containerd[1484]: time="2025-01-29T16:26:59.958066079Z" level=info msg="RemoveContainer for \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\"" Jan 29 16:26:59.958457 systemd[1]: Removed slice kubepods-besteffort-podc69f1ba6_8703_434f_a14e_22d47f68ec03.slice - libcontainer container kubepods-besteffort-podc69f1ba6_8703_434f_a14e_22d47f68ec03.slice. Jan 29 16:26:59.965270 containerd[1484]: time="2025-01-29T16:26:59.965234139Z" level=info msg="RemoveContainer for \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\" returns successfully" Jan 29 16:26:59.965802 kubelet[2606]: I0129 16:26:59.965756 2606 scope.go:117] "RemoveContainer" containerID="05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb" Jan 29 16:26:59.968525 containerd[1484]: time="2025-01-29T16:26:59.968492014Z" level=info msg="RemoveContainer for \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\"" Jan 29 16:26:59.973014 containerd[1484]: time="2025-01-29T16:26:59.972964922Z" level=info msg="RemoveContainer for \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\" returns successfully" Jan 29 16:26:59.973268 kubelet[2606]: I0129 16:26:59.973237 2606 scope.go:117] "RemoveContainer" containerID="037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318" Jan 29 16:26:59.979761 containerd[1484]: time="2025-01-29T16:26:59.979694928Z" level=info msg="RemoveContainer for \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\"" Jan 29 16:26:59.983742 containerd[1484]: time="2025-01-29T16:26:59.983685438Z" level=info msg="RemoveContainer for \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\" returns successfully" Jan 29 16:26:59.984055 kubelet[2606]: I0129 16:26:59.984018 2606 scope.go:117] "RemoveContainer" containerID="4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469" Jan 29 16:26:59.985456 containerd[1484]: time="2025-01-29T16:26:59.985397949Z" level=info msg="RemoveContainer for \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\"" Jan 29 16:26:59.989247 containerd[1484]: time="2025-01-29T16:26:59.989191394Z" level=info msg="RemoveContainer for \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\" returns successfully" Jan 29 16:26:59.989426 kubelet[2606]: I0129 16:26:59.989383 2606 scope.go:117] "RemoveContainer" containerID="ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05" Jan 29 16:26:59.990317 containerd[1484]: time="2025-01-29T16:26:59.990288012Z" level=info msg="RemoveContainer for \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\"" Jan 29 16:26:59.993585 containerd[1484]: time="2025-01-29T16:26:59.993539514Z" level=info msg="RemoveContainer for \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\" returns successfully" Jan 29 16:26:59.993780 kubelet[2606]: I0129 16:26:59.993730 2606 
scope.go:117] "RemoveContainer" containerID="f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e" Jan 29 16:26:59.993992 containerd[1484]: time="2025-01-29T16:26:59.993949545Z" level=error msg="ContainerStatus for \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\": not found" Jan 29 16:27:00.000025 kubelet[2606]: E0129 16:26:59.999999 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\": not found" containerID="f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e" Jan 29 16:27:00.000251 kubelet[2606]: I0129 16:27:00.000131 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e"} err="failed to get container status \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9296f46fb441bed461164e02a86b89f063d6669e36ba581b0871620d3a0ef1e\": not found" Jan 29 16:27:00.000251 kubelet[2606]: I0129 16:27:00.000235 2606 scope.go:117] "RemoveContainer" containerID="05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb" Jan 29 16:27:00.000412 containerd[1484]: time="2025-01-29T16:27:00.000386744Z" level=error msg="ContainerStatus for \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\": not found" Jan 29 16:27:00.000509 kubelet[2606]: E0129 16:27:00.000486 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\": not found" containerID="05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb" Jan 29 16:27:00.000571 kubelet[2606]: I0129 16:27:00.000508 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb"} err="failed to get container status \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\": rpc error: code = NotFound desc = an error occurred when try to find container \"05c11718adba0841ec19a4c4f51ab143b444dc0f7b23b714d15951460d846bdb\": not found" Jan 29 16:27:00.000571 kubelet[2606]: I0129 16:27:00.000525 2606 scope.go:117] "RemoveContainer" containerID="037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318" Jan 29 16:27:00.000796 containerd[1484]: time="2025-01-29T16:27:00.000759424Z" level=error msg="ContainerStatus for \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\": not found" Jan 29 16:27:00.000962 kubelet[2606]: E0129 16:27:00.000939 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\": not 
found" containerID="037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318" Jan 29 16:27:00.001012 kubelet[2606]: I0129 16:27:00.000975 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318"} err="failed to get container status \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\": rpc error: code = NotFound desc = an error occurred when try to find container \"037b83ba5e82c6a3a63d23c77684adea94f621ba5f444d59df4e17a09907c318\": not found" Jan 29 16:27:00.001012 kubelet[2606]: I0129 16:27:00.001001 2606 scope.go:117] "RemoveContainer" containerID="4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469" Jan 29 16:27:00.001198 containerd[1484]: time="2025-01-29T16:27:00.001174063Z" level=error msg="ContainerStatus for \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\": not found" Jan 29 16:27:00.001310 kubelet[2606]: E0129 16:27:00.001286 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\": not found" containerID="4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469" Jan 29 16:27:00.001351 kubelet[2606]: I0129 16:27:00.001308 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469"} err="failed to get container status \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\": rpc error: code = NotFound desc = an error occurred when try to find container \"4fb7778a67919da4e57ca83435a1b8088575ab99bef271caf9f4a9c668cc1469\": not found" Jan 29 16:27:00.001351 kubelet[2606]: I0129 16:27:00.001325 2606 scope.go:117] "RemoveContainer" containerID="ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05" Jan 29 16:27:00.001479 containerd[1484]: time="2025-01-29T16:27:00.001448275Z" level=error msg="ContainerStatus for \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\": not found" Jan 29 16:27:00.001635 kubelet[2606]: E0129 16:27:00.001613 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\": not found" containerID="ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05" Jan 29 16:27:00.001669 kubelet[2606]: I0129 16:27:00.001646 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05"} err="failed to get container status \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\": rpc error: code = NotFound desc = an error occurred when try to find container \"ddad20551a805e4a6b2bf227bf7b60fe366b7d20348b20da40502ef065784d05\": not found" Jan 29 16:27:00.001669 kubelet[2606]: I0129 16:27:00.001663 2606 scope.go:117] "RemoveContainer" 
containerID="b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d" Jan 29 16:27:00.002747 containerd[1484]: time="2025-01-29T16:27:00.002726088Z" level=info msg="RemoveContainer for \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\"" Jan 29 16:27:00.005996 containerd[1484]: time="2025-01-29T16:27:00.005958852Z" level=info msg="RemoveContainer for \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\" returns successfully" Jan 29 16:27:00.006167 kubelet[2606]: I0129 16:27:00.006093 2606 scope.go:117] "RemoveContainer" containerID="b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d" Jan 29 16:27:00.006325 containerd[1484]: time="2025-01-29T16:27:00.006242853Z" level=error msg="ContainerStatus for \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\": not found" Jan 29 16:27:00.006391 kubelet[2606]: E0129 16:27:00.006339 2606 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\": not found" containerID="b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d" Jan 29 16:27:00.006391 kubelet[2606]: I0129 16:27:00.006359 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d"} err="failed to get container status \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b50ef3f699da20cd9232819c8d912adf33f65fa2b1e2c0861fba4a7be4f0df4d\": not found" Jan 29 16:27:00.616304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db-rootfs.mount: Deactivated successfully. Jan 29 16:27:00.616442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d07b6a4d24d660c8374a0e66a8fc156e505ee1720ff4dfaffef5e760f97f0db-shm.mount: Deactivated successfully. Jan 29 16:27:00.616548 systemd[1]: var-lib-kubelet-pods-c69f1ba6\x2d8703\x2d434f\x2da14e\x2d22d47f68ec03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvm6v.mount: Deactivated successfully. Jan 29 16:27:00.616655 systemd[1]: var-lib-kubelet-pods-acbf8f52\x2d2030\x2d4935\x2db4e3\x2ddc6c908882cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6jxw.mount: Deactivated successfully. Jan 29 16:27:00.616797 systemd[1]: var-lib-kubelet-pods-acbf8f52\x2d2030\x2d4935\x2db4e3\x2ddc6c908882cc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:27:00.616926 systemd[1]: var-lib-kubelet-pods-acbf8f52\x2d2030\x2d4935\x2db4e3\x2ddc6c908882cc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 29 16:27:00.693780 kubelet[2606]: I0129 16:27:00.693732 2606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acbf8f52-2030-4935-b4e3-dc6c908882cc" path="/var/lib/kubelet/pods/acbf8f52-2030-4935-b4e3-dc6c908882cc/volumes"
Jan 29 16:27:00.694658 kubelet[2606]: I0129 16:27:00.694625 2606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c69f1ba6-8703-434f-a14e-22d47f68ec03" path="/var/lib/kubelet/pods/c69f1ba6-8703-434f-a14e-22d47f68ec03/volumes"
Jan 29 16:27:01.536040 sshd[4312]: Connection closed by 10.0.0.1 port 54792
Jan 29 16:27:01.536545 sshd-session[4308]: pam_unix(sshd:session): session closed for user core
Jan 29 16:27:01.549264 systemd[1]: sshd@27-10.0.0.140:22-10.0.0.1:54792.service: Deactivated successfully.
Jan 29 16:27:01.551451 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 16:27:01.552953 systemd-logind[1465]: Session 28 logged out. Waiting for processes to exit.
Jan 29 16:27:01.554300 systemd[1]: Started sshd@28-10.0.0.140:22-10.0.0.1:54804.service - OpenSSH per-connection server daemon (10.0.0.1:54804).
Jan 29 16:27:01.555102 systemd-logind[1465]: Removed session 28.
Jan 29 16:27:01.596576 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 54804 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:27:01.598185 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:27:01.603664 systemd-logind[1465]: New session 29 of user core.
Jan 29 16:27:01.617065 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 29 16:27:02.300847 sshd[4478]: Connection closed by 10.0.0.1 port 54804
Jan 29 16:27:02.301481 sshd-session[4474]: pam_unix(sshd:session): session closed for user core
Jan 29 16:27:02.315172 kubelet[2606]: I0129 16:27:02.314515 2606 memory_manager.go:355] "RemoveStaleState removing state" podUID="c69f1ba6-8703-434f-a14e-22d47f68ec03" containerName="cilium-operator"
Jan 29 16:27:02.315172 kubelet[2606]: I0129 16:27:02.314544 2606 memory_manager.go:355] "RemoveStaleState removing state" podUID="acbf8f52-2030-4935-b4e3-dc6c908882cc" containerName="cilium-agent"
Jan 29 16:27:02.316722 systemd[1]: sshd@28-10.0.0.140:22-10.0.0.1:54804.service: Deactivated successfully.
Jan 29 16:27:02.319087 systemd[1]: session-29.scope: Deactivated successfully.
Jan 29 16:27:02.321594 systemd-logind[1465]: Session 29 logged out. Waiting for processes to exit.
Jan 29 16:27:02.324062 systemd-logind[1465]: Removed session 29.
Jan 29 16:27:02.331319 systemd[1]: Started sshd@29-10.0.0.140:22-10.0.0.1:54814.service - OpenSSH per-connection server daemon (10.0.0.1:54814).
Jan 29 16:27:02.347715 systemd[1]: Created slice kubepods-burstable-poded040ca9_165a_424f_92d7_b2311dc326eb.slice - libcontainer container kubepods-burstable-poded040ca9_165a_424f_92d7_b2311dc326eb.slice.
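[Editor's note] The var-lib-kubelet-pods-....mount unit names in the cleanup above are systemd's escaped form of the volume paths: '/' becomes '-', and bytes outside [a-zA-Z0-9:_.] are hex-escaped, so '-' turns into \x2d and '~' into \x7e. A simplified Go sketch of that escaping (approximating systemd-escape --path; not byte-for-byte identical to systemd's implementation, which has extra rules for leading dots):

package main

import (
	"fmt"
	"strings"
)

// escapePath mimics systemd's path escaping: strip surrounding slashes,
// map interior '/' to '-', and hex-escape everything outside the unit-name
// alphabet, which is why '-' appears as \x2d and '~' as \x7e above.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/acbf8f52-2030-4935-b4e3-dc6c908882cc/volumes/kubernetes.io~projected/hubble-tls"
	// Prints the same unit name the journal shows being deactivated above.
	fmt.Println(escapePath(p) + ".mount")
}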
Jan 29 16:27:02.363217 kubelet[2606]: I0129 16:27:02.363169 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-cilium-run\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363217 kubelet[2606]: I0129 16:27:02.363220 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-hostproc\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363404 kubelet[2606]: I0129 16:27:02.363241 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed040ca9-165a-424f-92d7-b2311dc326eb-clustermesh-secrets\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363404 kubelet[2606]: I0129 16:27:02.363265 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed040ca9-165a-424f-92d7-b2311dc326eb-hubble-tls\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363404 kubelet[2606]: I0129 16:27:02.363292 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-host-proc-sys-kernel\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363487 kubelet[2606]: I0129 16:27:02.363374 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-lib-modules\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363487 kubelet[2606]: I0129 16:27:02.363451 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-xtables-lock\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363487 kubelet[2606]: I0129 16:27:02.363475 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-cilium-cgroup\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363590 kubelet[2606]: I0129 16:27:02.363493 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ed040ca9-165a-424f-92d7-b2311dc326eb-cilium-ipsec-secrets\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363590 kubelet[2606]: I0129 16:27:02.363516 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-host-proc-sys-net\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363590 kubelet[2606]: I0129 16:27:02.363539 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-etc-cni-netd\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363590 kubelet[2606]: I0129 16:27:02.363564 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrrr\" (UniqueName: \"kubernetes.io/projected/ed040ca9-165a-424f-92d7-b2311dc326eb-kube-api-access-jxrrr\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363590 kubelet[2606]: I0129 16:27:02.363583 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-bpf-maps\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363751 kubelet[2606]: I0129 16:27:02.363605 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed040ca9-165a-424f-92d7-b2311dc326eb-cni-path\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.363751 kubelet[2606]: I0129 16:27:02.363626 2606 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed040ca9-165a-424f-92d7-b2311dc326eb-cilium-config-path\") pod \"cilium-d85zr\" (UID: \"ed040ca9-165a-424f-92d7-b2311dc326eb\") " pod="kube-system/cilium-d85zr"
Jan 29 16:27:02.373477 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 54814 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:27:02.375293 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:27:02.379910 systemd-logind[1465]: New session 30 of user core.
Jan 29 16:27:02.396043 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 29 16:27:02.448314 sshd[4492]: Connection closed by 10.0.0.1 port 54814
Jan 29 16:27:02.448767 sshd-session[4489]: pam_unix(sshd:session): session closed for user core
Jan 29 16:27:02.460763 systemd[1]: sshd@29-10.0.0.140:22-10.0.0.1:54814.service: Deactivated successfully.
Jan 29 16:27:02.462794 systemd[1]: session-30.scope: Deactivated successfully.
Jan 29 16:27:02.464445 systemd-logind[1465]: Session 30 logged out. Waiting for processes to exit.
Jan 29 16:27:02.476228 systemd[1]: Started sshd@30-10.0.0.140:22-10.0.0.1:54830.service - OpenSSH per-connection server daemon (10.0.0.1:54830).
Jan 29 16:27:02.484228 systemd-logind[1465]: Removed session 30.
Jan 29 16:27:02.510698 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 54830 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:27:02.512558 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:27:02.517483 systemd-logind[1465]: New session 31 of user core.
Jan 29 16:27:02.525011 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 29 16:27:02.651483 kubelet[2606]: E0129 16:27:02.651354 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:02.652250 containerd[1484]: time="2025-01-29T16:27:02.651913932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d85zr,Uid:ed040ca9-165a-424f-92d7-b2311dc326eb,Namespace:kube-system,Attempt:0,}"
Jan 29 16:27:02.671796 containerd[1484]: time="2025-01-29T16:27:02.671694032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:27:02.671796 containerd[1484]: time="2025-01-29T16:27:02.671756300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:27:02.671796 containerd[1484]: time="2025-01-29T16:27:02.671770648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:02.672604 containerd[1484]: time="2025-01-29T16:27:02.672536224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:02.687592 systemd[1]: run-containerd-runc-k8s.io-52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625-runc.RqPz3P.mount: Deactivated successfully.
Jan 29 16:27:02.694004 systemd[1]: Started cri-containerd-52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625.scope - libcontainer container 52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625.
Jan 29 16:27:02.719130 containerd[1484]: time="2025-01-29T16:27:02.719083976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d85zr,Uid:ed040ca9-165a-424f-92d7-b2311dc326eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\""
Jan 29 16:27:02.719904 kubelet[2606]: E0129 16:27:02.719760 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:02.721957 containerd[1484]: time="2025-01-29T16:27:02.721884053Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:27:02.741372 containerd[1484]: time="2025-01-29T16:27:02.741296513Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b8068073cf14a0c714998aa2919cf85dbee8ecf264f56a6d112e8b68f28d3df\""
Jan 29 16:27:02.742037 containerd[1484]: time="2025-01-29T16:27:02.741848834Z" level=info msg="StartContainer for \"5b8068073cf14a0c714998aa2919cf85dbee8ecf264f56a6d112e8b68f28d3df\""
Jan 29 16:27:02.773099 systemd[1]: Started cri-containerd-5b8068073cf14a0c714998aa2919cf85dbee8ecf264f56a6d112e8b68f28d3df.scope - libcontainer container 5b8068073cf14a0c714998aa2919cf85dbee8ecf264f56a6d112e8b68f28d3df.
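[Editor's note] The records above trace the standard CRI call sequence: RunPodSandbox returns a sandbox ID, then each container is created within that sandbox and started. A rough Go sketch of the same sequence against containerd's CRI socket follows; the metadata and image tag are placeholders, and actually running it would create a real sandbox, so treat it purely as illustration:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's default CRI endpoint; adjust for your runtime.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Step 1: RunPodSandbox, mirroring the PodSandboxMetadata in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-d85zr",
			Namespace: "kube-system",
			Uid:       "ed040ca9-165a-424f-92d7-b2311dc326eb",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: CreateContainer within the returned sandbox id.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.x"}, // placeholder tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Step 3: StartContainer, matching the "StartContainer for ..." records.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}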
Jan 29 16:27:02.804618 containerd[1484]: time="2025-01-29T16:27:02.804561919Z" level=info msg="StartContainer for \"5b8068073cf14a0c714998aa2919cf85dbee8ecf264f56a6d112e8b68f28d3df\" returns successfully"
Jan 29 16:27:02.813956 systemd[1]: cri-containerd-5b8068073cf14a0c714998aa2919cf85dbee8ecf264f56a6d112e8b68f28d3df.scope: Deactivated successfully.
Jan 29 16:27:02.847722 containerd[1484]: time="2025-01-29T16:27:02.847635833Z" level=info msg="shim disconnected" id=5b8068073cf14a0c714998aa2919cf85dbee8ecf264f56a6d112e8b68f28d3df namespace=k8s.io
Jan 29 16:27:02.847722 containerd[1484]: time="2025-01-29T16:27:02.847713210Z" level=warning msg="cleaning up after shim disconnected" id=5b8068073cf14a0c714998aa2919cf85dbee8ecf264f56a6d112e8b68f28d3df namespace=k8s.io
Jan 29 16:27:02.847722 containerd[1484]: time="2025-01-29T16:27:02.847725804Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:27:02.957348 kubelet[2606]: E0129 16:27:02.957215 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:02.958767 containerd[1484]: time="2025-01-29T16:27:02.958721912Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:27:02.973388 containerd[1484]: time="2025-01-29T16:27:02.973233008Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d3b21fe83db97f9cdc57e7b870ca6c5a8d25b75220bceb60ff707a836ad195b\""
Jan 29 16:27:02.975106 containerd[1484]: time="2025-01-29T16:27:02.975026670Z" level=info msg="StartContainer for \"0d3b21fe83db97f9cdc57e7b870ca6c5a8d25b75220bceb60ff707a836ad195b\""
Jan 29 16:27:03.008133 systemd[1]: Started cri-containerd-0d3b21fe83db97f9cdc57e7b870ca6c5a8d25b75220bceb60ff707a836ad195b.scope - libcontainer container 0d3b21fe83db97f9cdc57e7b870ca6c5a8d25b75220bceb60ff707a836ad195b.
Jan 29 16:27:03.037221 containerd[1484]: time="2025-01-29T16:27:03.037178705Z" level=info msg="StartContainer for \"0d3b21fe83db97f9cdc57e7b870ca6c5a8d25b75220bceb60ff707a836ad195b\" returns successfully"
Jan 29 16:27:03.043641 systemd[1]: cri-containerd-0d3b21fe83db97f9cdc57e7b870ca6c5a8d25b75220bceb60ff707a836ad195b.scope: Deactivated successfully.
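[Editor's note] The recurring dns.go:153 events fire because this host's resolv.conf lists more than three nameservers, while glibc resolvers only use the first three; kubelet clamps the list (here to 1.1.1.1 1.0.0.1 8.8.8.8) and reports the rest as omitted. A small Go sketch of that clamping check:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// glibc's resolver honors at most three nameserver entries, which is the
// limit kubelet enforces when it emits "Nameserver limits exceeded".
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors kubelet's behavior: apply only the first three.
		fmt.Printf("limit exceeded, applying first %d of %d: %s\n",
			maxNameservers, len(servers), strings.Join(servers[:maxNameservers], " "))
	}
}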
Jan 29 16:27:03.069463 containerd[1484]: time="2025-01-29T16:27:03.069160466Z" level=info msg="shim disconnected" id=0d3b21fe83db97f9cdc57e7b870ca6c5a8d25b75220bceb60ff707a836ad195b namespace=k8s.io
Jan 29 16:27:03.069801 containerd[1484]: time="2025-01-29T16:27:03.069776038Z" level=warning msg="cleaning up after shim disconnected" id=0d3b21fe83db97f9cdc57e7b870ca6c5a8d25b75220bceb60ff707a836ad195b namespace=k8s.io
Jan 29 16:27:03.069801 containerd[1484]: time="2025-01-29T16:27:03.069797518Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:27:03.749820 kubelet[2606]: E0129 16:27:03.749786 2606 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:27:03.959979 kubelet[2606]: E0129 16:27:03.959953 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:03.961509 containerd[1484]: time="2025-01-29T16:27:03.961465162Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:27:03.979128 containerd[1484]: time="2025-01-29T16:27:03.979078999Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f7be098634e4727fba935ed56feef2fc3e92b134b2f2ac08e6c5deac43a9d21a\""
Jan 29 16:27:03.979787 containerd[1484]: time="2025-01-29T16:27:03.979619297Z" level=info msg="StartContainer for \"f7be098634e4727fba935ed56feef2fc3e92b134b2f2ac08e6c5deac43a9d21a\""
Jan 29 16:27:04.011988 systemd[1]: Started cri-containerd-f7be098634e4727fba935ed56feef2fc3e92b134b2f2ac08e6c5deac43a9d21a.scope - libcontainer container f7be098634e4727fba935ed56feef2fc3e92b134b2f2ac08e6c5deac43a9d21a.
Jan 29 16:27:04.043981 containerd[1484]: time="2025-01-29T16:27:04.043930075Z" level=info msg="StartContainer for \"f7be098634e4727fba935ed56feef2fc3e92b134b2f2ac08e6c5deac43a9d21a\" returns successfully"
Jan 29 16:27:04.044237 systemd[1]: cri-containerd-f7be098634e4727fba935ed56feef2fc3e92b134b2f2ac08e6c5deac43a9d21a.scope: Deactivated successfully.
Jan 29 16:27:04.077156 containerd[1484]: time="2025-01-29T16:27:04.077082363Z" level=info msg="shim disconnected" id=f7be098634e4727fba935ed56feef2fc3e92b134b2f2ac08e6c5deac43a9d21a namespace=k8s.io
Jan 29 16:27:04.077156 containerd[1484]: time="2025-01-29T16:27:04.077139081Z" level=warning msg="cleaning up after shim disconnected" id=f7be098634e4727fba935ed56feef2fc3e92b134b2f2ac08e6c5deac43a9d21a namespace=k8s.io
Jan 29 16:27:04.077156 containerd[1484]: time="2025-01-29T16:27:04.077150984Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:27:04.478995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7be098634e4727fba935ed56feef2fc3e92b134b2f2ac08e6c5deac43a9d21a-rootfs.mount: Deactivated successfully.
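[Editor's note] The "cni plugin not initialized" condition above persists until the cilium-agent container installs its CNI configuration, and kubelet keeps the node network NotReady in the meantime. A quick Go check of the conventional CNI config directory (/etc/cni/net.d is the usual default, assumed here):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Lists any CNI config files; an empty result explains a sustained
// "Container runtime network not ready" condition like the one above.
func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*.conf*") // matches .conf and .conflist
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(matches) == 0 {
		fmt.Println("no CNI config yet: network will stay NotReady")
		return
	}
	for _, m := range matches {
		fmt.Println("found CNI config:", m)
	}
}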
Jan 29 16:27:04.962602 kubelet[2606]: E0129 16:27:04.962570 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:04.970004 containerd[1484]: time="2025-01-29T16:27:04.969954164Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:27:05.134536 containerd[1484]: time="2025-01-29T16:27:05.134492096Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c151925c35f6fd395c8a5c98297743c2a8ee5eb561e9a7b8ae1dfb09c9fdfd6\""
Jan 29 16:27:05.135053 containerd[1484]: time="2025-01-29T16:27:05.135023407Z" level=info msg="StartContainer for \"4c151925c35f6fd395c8a5c98297743c2a8ee5eb561e9a7b8ae1dfb09c9fdfd6\""
Jan 29 16:27:05.169065 systemd[1]: Started cri-containerd-4c151925c35f6fd395c8a5c98297743c2a8ee5eb561e9a7b8ae1dfb09c9fdfd6.scope - libcontainer container 4c151925c35f6fd395c8a5c98297743c2a8ee5eb561e9a7b8ae1dfb09c9fdfd6.
Jan 29 16:27:05.191958 systemd[1]: cri-containerd-4c151925c35f6fd395c8a5c98297743c2a8ee5eb561e9a7b8ae1dfb09c9fdfd6.scope: Deactivated successfully.
Jan 29 16:27:05.195210 containerd[1484]: time="2025-01-29T16:27:05.195178707Z" level=info msg="StartContainer for \"4c151925c35f6fd395c8a5c98297743c2a8ee5eb561e9a7b8ae1dfb09c9fdfd6\" returns successfully"
Jan 29 16:27:05.219459 containerd[1484]: time="2025-01-29T16:27:05.219292526Z" level=info msg="shim disconnected" id=4c151925c35f6fd395c8a5c98297743c2a8ee5eb561e9a7b8ae1dfb09c9fdfd6 namespace=k8s.io
Jan 29 16:27:05.219459 containerd[1484]: time="2025-01-29T16:27:05.219348873Z" level=warning msg="cleaning up after shim disconnected" id=4c151925c35f6fd395c8a5c98297743c2a8ee5eb561e9a7b8ae1dfb09c9fdfd6 namespace=k8s.io
Jan 29 16:27:05.219459 containerd[1484]: time="2025-01-29T16:27:05.219358802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:27:05.478748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c151925c35f6fd395c8a5c98297743c2a8ee5eb561e9a7b8ae1dfb09c9fdfd6-rootfs.mount: Deactivated successfully.
Jan 29 16:27:05.969910 kubelet[2606]: E0129 16:27:05.969853 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:05.971561 containerd[1484]: time="2025-01-29T16:27:05.971525196Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:27:06.048663 containerd[1484]: time="2025-01-29T16:27:06.048519382Z" level=info msg="CreateContainer within sandbox \"52bce032a870e90637afbd44bc6aa08c5a919f78c1c0c03f73790af1e558d625\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d8923ddf35dd9c8ce077c90003594c667bc5909a0372114fdb392159392af71\""
Jan 29 16:27:06.049300 containerd[1484]: time="2025-01-29T16:27:06.049220333Z" level=info msg="StartContainer for \"4d8923ddf35dd9c8ce077c90003594c667bc5909a0372114fdb392159392af71\""
Jan 29 16:27:06.085053 systemd[1]: Started cri-containerd-4d8923ddf35dd9c8ce077c90003594c667bc5909a0372114fdb392159392af71.scope - libcontainer container 4d8923ddf35dd9c8ce077c90003594c667bc5909a0372114fdb392159392af71.
Jan 29 16:27:06.167581 containerd[1484]: time="2025-01-29T16:27:06.167528799Z" level=info msg="StartContainer for \"4d8923ddf35dd9c8ce077c90003594c667bc5909a0372114fdb392159392af71\" returns successfully"
Jan 29 16:27:06.672909 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 16:27:06.973809 kubelet[2606]: E0129 16:27:06.973672 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:06.988677 kubelet[2606]: I0129 16:27:06.988607 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d85zr" podStartSLOduration=4.988586597 podStartE2EDuration="4.988586597s" podCreationTimestamp="2025-01-29 16:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:27:06.98794111 +0000 UTC m=+98.385950421" watchObservedRunningTime="2025-01-29 16:27:06.988586597 +0000 UTC m=+98.386595908"
Jan 29 16:27:08.652408 kubelet[2606]: E0129 16:27:08.652364 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:09.870922 systemd-networkd[1423]: lxc_health: Link UP
Jan 29 16:27:09.872811 systemd-networkd[1423]: lxc_health: Gained carrier
Jan 29 16:27:10.653800 kubelet[2606]: E0129 16:27:10.653756 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:10.968765 systemd[1]: run-containerd-runc-k8s.io-4d8923ddf35dd9c8ce077c90003594c667bc5909a0372114fdb392159392af71-runc.6d934h.mount: Deactivated successfully.
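[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration=4.988586597, which is watchObservedRunningTime minus podCreationTimestamp; the pull timestamps are zero because the images were already present on the node. The arithmetic, redone in Go with the timestamps exactly as logged:

package main

import (
	"fmt"
	"time"
)

// mustParse parses a kubelet-style timestamp or panics; fine for a demo.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-29 16:27:02 +0000 UTC")
	running := mustParse("2025-01-29 16:27:06.988586597 +0000 UTC")
	// Prints podStartSLOduration=4.988586597, matching the log entry.
	fmt.Printf("podStartSLOduration=%.9f\n", running.Sub(created).Seconds())
}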
Jan 29 16:27:10.979785 kubelet[2606]: E0129 16:27:10.979758 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:11.309055 systemd-networkd[1423]: lxc_health: Gained IPv6LL
Jan 29 16:27:13.692037 kubelet[2606]: E0129 16:27:13.691995 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:14.692510 kubelet[2606]: E0129 16:27:14.692458 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:17.266381 systemd[1]: run-containerd-runc-k8s.io-4d8923ddf35dd9c8ce077c90003594c667bc5909a0372114fdb392159392af71-runc.JRpkss.mount: Deactivated successfully.
Jan 29 16:27:17.311656 sshd[4505]: Connection closed by 10.0.0.1 port 54830
Jan 29 16:27:17.312027 sshd-session[4500]: pam_unix(sshd:session): session closed for user core
Jan 29 16:27:17.315516 systemd[1]: sshd@30-10.0.0.140:22-10.0.0.1:54830.service: Deactivated successfully.
Jan 29 16:27:17.317427 systemd[1]: session-31.scope: Deactivated successfully.
Jan 29 16:27:17.318083 systemd-logind[1465]: Session 31 logged out. Waiting for processes to exit.
Jan 29 16:27:17.318910 systemd-logind[1465]: Removed session 31.