Feb 13 15:55:30.882618 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025 Feb 13 15:55:30.882639 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 15:55:30.882662 kernel: BIOS-provided physical RAM map: Feb 13 15:55:30.882669 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 15:55:30.882675 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 15:55:30.882682 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 15:55:30.882696 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Feb 13 15:55:30.882703 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Feb 13 15:55:30.882717 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 15:55:30.882727 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Feb 13 15:55:30.882734 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:55:30.882747 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 15:55:30.882754 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:55:30.882771 kernel: NX (Execute Disable) protection: active Feb 13 15:55:30.882779 kernel: APIC: Static calls initialized Feb 13 15:55:30.882797 kernel: SMBIOS 2.8 present. 
Feb 13 15:55:30.882824 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 13 15:55:30.882838 kernel: Hypervisor detected: KVM Feb 13 15:55:30.882846 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 15:55:30.882860 kernel: kvm-clock: using sched offset of 2380567154 cycles Feb 13 15:55:30.882875 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 15:55:30.882883 kernel: tsc: Detected 2794.748 MHz processor Feb 13 15:55:30.882890 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:55:30.882898 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:55:30.882906 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Feb 13 15:55:30.882917 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 15:55:30.882924 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:55:30.882931 kernel: Using GB pages for direct mapping Feb 13 15:55:30.882939 kernel: ACPI: Early table checksum verification disabled Feb 13 15:55:30.882946 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Feb 13 15:55:30.882954 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:55:30.882961 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:55:30.882968 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:55:30.882976 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 13 15:55:30.882986 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:55:30.882993 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:55:30.883000 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:55:30.883008 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Feb 13 15:55:30.883015 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Feb 13 15:55:30.883024 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Feb 13 15:55:30.883039 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 13 15:55:30.883052 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Feb 13 15:55:30.883061 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Feb 13 15:55:30.883072 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Feb 13 15:55:30.883083 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Feb 13 15:55:30.883095 kernel: No NUMA configuration found Feb 13 15:55:30.883106 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Feb 13 15:55:30.883114 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Feb 13 15:55:30.883124 kernel: Zone ranges: Feb 13 15:55:30.883132 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:55:30.883139 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Feb 13 15:55:30.883147 kernel: Normal empty Feb 13 15:55:30.883154 kernel: Movable zone start for each node Feb 13 15:55:30.883162 kernel: Early memory node ranges Feb 13 15:55:30.883170 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 15:55:30.883177 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Feb 13 15:55:30.883185 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Feb 13 15:55:30.883196 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:55:30.883207 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 15:55:30.883217 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Feb 13 15:55:30.883227 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 15:55:30.883236 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:55:30.883246 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:55:30.883256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 15:55:30.883267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:55:30.883276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:55:30.883283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 15:55:30.883294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:55:30.883302 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:55:30.883310 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 15:55:30.883317 kernel: TSC deadline timer available Feb 13 15:55:30.883325 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 15:55:30.883333 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 15:55:30.883341 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 15:55:30.883348 kernel: kvm-guest: setup PV sched yield Feb 13 15:55:30.883386 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 13 15:55:30.883397 kernel: Booting paravirtualized kernel on KVM Feb 13 15:55:30.883405 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:55:30.883413 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 15:55:30.883421 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 15:55:30.883428 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 15:55:30.883436 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 15:55:30.883443 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:55:30.883451 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:55:30.883460 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 15:55:30.883471 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:55:30.883478 kernel: random: crng init done Feb 13 15:55:30.883486 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:55:30.883494 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:55:30.883501 kernel: Fallback order for Node 0: 0 Feb 13 15:55:30.883509 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Feb 13 15:55:30.883517 kernel: Policy zone: DMA32 Feb 13 15:55:30.883524 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:55:30.883535 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 138948K reserved, 0K cma-reserved) Feb 13 15:55:30.883543 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:55:30.883550 kernel: ftrace: allocating 37890 entries in 149 pages Feb 13 15:55:30.883558 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:55:30.883565 kernel: Dynamic Preempt: voluntary Feb 13 15:55:30.883573 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:55:30.883582 kernel: rcu: RCU event tracing is enabled. Feb 13 15:55:30.883589 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:55:30.883597 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:55:30.883607 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:55:30.883615 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:55:30.883623 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:55:30.883630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:55:30.883638 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 15:55:30.883646 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:55:30.883653 kernel: Console: colour VGA+ 80x25 Feb 13 15:55:30.883661 kernel: printk: console [ttyS0] enabled Feb 13 15:55:30.883668 kernel: ACPI: Core revision 20230628 Feb 13 15:55:30.883676 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 15:55:30.883686 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:55:30.883694 kernel: x2apic enabled Feb 13 15:55:30.883701 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:55:30.883709 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 15:55:30.883717 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 15:55:30.883725 kernel: kvm-guest: setup PV IPIs Feb 13 15:55:30.883741 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 15:55:30.883749 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 15:55:30.883765 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 13 15:55:30.883773 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 15:55:30.883781 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 15:55:30.883791 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 15:55:30.883799 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:55:30.883807 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:55:30.883816 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:55:30.883825 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:55:30.883835 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 15:55:30.883843 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 15:55:30.883851 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:55:30.883859 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:55:30.883868 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 15:55:30.883877 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 15:55:30.883885 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 15:55:30.883893 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:55:30.883904 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:55:30.883912 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:55:30.883920 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:55:30.883928 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Feb 13 15:55:30.883936 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:55:30.883948 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:55:30.883963 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:55:30.883981 kernel: landlock: Up and running. Feb 13 15:55:30.883996 kernel: SELinux: Initializing. Feb 13 15:55:30.884017 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:55:30.884032 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:55:30.884050 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 15:55:30.884065 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:55:30.884080 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:55:30.884095 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:55:30.884115 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 15:55:30.884129 kernel: ... version: 0 Feb 13 15:55:30.884137 kernel: ... bit width: 48 Feb 13 15:55:30.884148 kernel: ... generic registers: 6 Feb 13 15:55:30.884156 kernel: ... value mask: 0000ffffffffffff Feb 13 15:55:30.884164 kernel: ... max period: 00007fffffffffff Feb 13 15:55:30.884172 kernel: ... fixed-purpose events: 0 Feb 13 15:55:30.884180 kernel: ... event mask: 000000000000003f Feb 13 15:55:30.884188 kernel: signal: max sigframe size: 1776 Feb 13 15:55:30.884196 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:55:30.884205 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:55:30.884213 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:55:30.884223 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:55:30.884231 kernel: .... 
node #0, CPUs: #1 #2 #3 Feb 13 15:55:30.884239 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:55:30.884247 kernel: smpboot: Max logical packages: 1 Feb 13 15:55:30.884255 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 13 15:55:30.884263 kernel: devtmpfs: initialized Feb 13 15:55:30.884271 kernel: x86/mm: Memory block size: 128MB Feb 13 15:55:30.884279 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:55:30.884287 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:55:30.884299 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:55:30.884310 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:55:30.884321 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:55:30.884331 kernel: audit: type=2000 audit(1739462130.243:1): state=initialized audit_enabled=0 res=1 Feb 13 15:55:30.884342 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:55:30.884382 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:55:30.884395 kernel: cpuidle: using governor menu Feb 13 15:55:30.884406 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:55:30.884416 kernel: dca service started, version 1.12.1 Feb 13 15:55:30.884431 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 15:55:30.884443 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 15:55:30.884451 kernel: PCI: Using configuration type 1 for base access Feb 13 15:55:30.884460 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 15:55:30.884468 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:55:30.884476 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:55:30.884484 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:55:30.884492 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:55:30.884500 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:55:30.884511 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:55:30.884519 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:55:30.884527 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:55:30.884535 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:55:30.884543 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:55:30.884551 kernel: ACPI: Interpreter enabled Feb 13 15:55:30.884559 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:55:30.884567 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:55:30.884575 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:55:30.884586 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 15:55:30.884594 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 15:55:30.884602 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:55:30.884800 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:55:30.884931 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 15:55:30.885053 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 15:55:30.885064 kernel: PCI host bridge to bus 0000:00 Feb 13 15:55:30.885188 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 15:55:30.885305 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:55:30.885443 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Feb 13 15:55:30.885619 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Feb 13 15:55:30.885776 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 15:55:30.885891 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Feb 13 15:55:30.886001 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:55:30.886158 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 15:55:30.886304 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 15:55:30.886472 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 13 15:55:30.886662 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 13 15:55:30.886810 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 13 15:55:30.886932 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 15:55:30.887063 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:55:30.887190 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 13 15:55:30.887340 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 13 15:55:30.887507 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 13 15:55:30.887648 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 15:55:30.887800 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 15:55:30.887926 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 13 15:55:30.888049 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 13 15:55:30.888186 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:55:30.888310 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Feb 13 15:55:30.888470 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 13 15:55:30.888596 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit 
pref] Feb 13 15:55:30.888718 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 13 15:55:30.888885 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 15:55:30.889016 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 15:55:30.889146 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 15:55:30.889267 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Feb 13 15:55:30.889404 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Feb 13 15:55:30.889547 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 15:55:30.889671 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 13 15:55:30.889681 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:55:30.889694 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 15:55:30.889702 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:55:30.889710 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:55:30.889718 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 15:55:30.889726 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 15:55:30.889734 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 15:55:30.889742 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 15:55:30.889750 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 15:55:30.889769 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 15:55:30.889779 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 15:55:30.889787 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 15:55:30.889795 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 15:55:30.889803 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 15:55:30.889812 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 15:55:30.889821 kernel: ACPI: PCI: 
Interrupt link GSIH configured for IRQ 23 Feb 13 15:55:30.889829 kernel: iommu: Default domain type: Translated Feb 13 15:55:30.889837 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:55:30.889845 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:55:30.889855 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 15:55:30.889863 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 15:55:30.889871 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Feb 13 15:55:30.890020 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 15:55:30.890148 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 15:55:30.890272 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 15:55:30.890282 kernel: vgaarb: loaded Feb 13 15:55:30.890291 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 15:55:30.890303 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 15:55:30.890311 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 15:55:30.890319 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:55:30.890327 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:55:30.890335 kernel: pnp: PnP ACPI init Feb 13 15:55:30.890553 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 15:55:30.890570 kernel: pnp: PnP ACPI: found 6 devices Feb 13 15:55:30.890580 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:55:30.890592 kernel: NET: Registered PF_INET protocol family Feb 13 15:55:30.890600 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:55:30.890609 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:55:30.890617 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:55:30.890625 kernel: TCP established hash table entries: 32768 
(order: 6, 262144 bytes, linear) Feb 13 15:55:30.890633 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:55:30.890641 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:55:30.890649 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:55:30.890657 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:55:30.890667 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:55:30.890675 kernel: NET: Registered PF_XDP protocol family Feb 13 15:55:30.890800 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 15:55:30.890911 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 15:55:30.891019 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 15:55:30.891128 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Feb 13 15:55:30.891236 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 15:55:30.891345 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Feb 13 15:55:30.891376 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:55:30.891403 kernel: Initialise system trusted keyrings Feb 13 15:55:30.891412 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:55:30.891420 kernel: Key type asymmetric registered Feb 13 15:55:30.891428 kernel: Asymmetric key parser 'x509' registered Feb 13 15:55:30.891436 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:55:30.891444 kernel: io scheduler mq-deadline registered Feb 13 15:55:30.891453 kernel: io scheduler kyber registered Feb 13 15:55:30.891461 kernel: io scheduler bfq registered Feb 13 15:55:30.891469 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:55:30.891480 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 15:55:30.891488 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 15:55:30.891496 kernel: ACPI: 
\_SB_.GSIE: Enabled at IRQ 20 Feb 13 15:55:30.891504 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:55:30.891512 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:55:30.891520 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 15:55:30.891528 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 15:55:30.891537 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 15:55:30.891678 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 15:55:30.891695 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 15:55:30.891821 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 15:55:30.891936 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:55:30 UTC (1739462130) Feb 13 15:55:30.892048 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 13 15:55:30.892058 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 15:55:30.892066 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:55:30.892075 kernel: Segment Routing with IPv6 Feb 13 15:55:30.892083 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:55:30.892095 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:55:30.892103 kernel: Key type dns_resolver registered Feb 13 15:55:30.892111 kernel: IPI shorthand broadcast: enabled Feb 13 15:55:30.892119 kernel: sched_clock: Marking stable (563002212, 137903380)->(852028467, -151122875) Feb 13 15:55:30.892127 kernel: registered taskstats version 1 Feb 13 15:55:30.892135 kernel: Loading compiled-in X.509 certificates Feb 13 15:55:30.892143 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb' Feb 13 15:55:30.892151 kernel: Key type .fscrypt registered Feb 13 15:55:30.892159 kernel: Key type fscrypt-provisioning registered Feb 13 15:55:30.892169 kernel: ima: No TPM chip found, activating 
TPM-bypass! Feb 13 15:55:30.892177 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:55:30.892185 kernel: ima: No architecture policies found Feb 13 15:55:30.892193 kernel: clk: Disabling unused clocks Feb 13 15:55:30.892201 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 15:55:30.892209 kernel: Write protecting the kernel read-only data: 38912k Feb 13 15:55:30.892217 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 15:55:30.892225 kernel: Run /init as init process Feb 13 15:55:30.892236 kernel: with arguments: Feb 13 15:55:30.892244 kernel: /init Feb 13 15:55:30.892251 kernel: with environment: Feb 13 15:55:30.892259 kernel: HOME=/ Feb 13 15:55:30.892267 kernel: TERM=linux Feb 13 15:55:30.892275 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:55:30.892285 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:55:30.892295 systemd[1]: Detected virtualization kvm. Feb 13 15:55:30.892307 systemd[1]: Detected architecture x86-64. Feb 13 15:55:30.892315 systemd[1]: Running in initrd. Feb 13 15:55:30.892324 systemd[1]: No hostname configured, using default hostname. Feb 13 15:55:30.892332 systemd[1]: Hostname set to . Feb 13 15:55:30.892341 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:55:30.892362 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:55:30.892371 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:55:30.892380 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 15:55:30.892392 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:55:30.892412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:55:30.892423 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:55:30.892433 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:55:30.892444 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:55:30.892455 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:55:30.892464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:55:30.892473 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:55:30.892482 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:55:30.892491 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:55:30.892500 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:55:30.892509 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:55:30.892518 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:55:30.892529 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:55:30.892538 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:55:30.892547 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:55:30.892556 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:55:30.892565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:55:30.892574 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 15:55:30.892583 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:55:30.892592 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:55:30.892601 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:55:30.892613 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:55:30.892622 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:55:30.892631 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:55:30.892640 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:55:30.892649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:55:30.892658 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:55:30.892667 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:55:30.892676 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:55:30.892710 systemd-journald[194]: Collecting audit messages is disabled.
Feb 13 15:55:30.892742 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:55:30.892755 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:55:30.892774 systemd-journald[194]: Journal started
Feb 13 15:55:30.892795 systemd-journald[194]: Runtime Journal (/run/log/journal/40831c46a01241cfb77e6e39e439ed81) is 6.0M, max 48.3M, 42.3M free.
Feb 13 15:55:30.884446 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 15:55:30.924007 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:55:30.924032 kernel: Bridge firewalling registered
Feb 13 15:55:30.911291 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 15:55:30.925475 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:55:30.926091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:55:30.939593 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:55:30.940372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:55:30.943292 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:55:30.945305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:55:30.950784 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:55:30.956326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:55:30.959886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:55:30.960168 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:55:30.968480 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:55:30.975457 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:55:30.976483 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:55:30.993076 dracut-cmdline[234]: dracut-dracut-053
Feb 13 15:55:30.996103 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:55:30.997008 systemd-resolved[223]: Positive Trust Anchors:
Feb 13 15:55:30.997017 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:55:30.997047 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:55:30.999435 systemd-resolved[223]: Defaulting to hostname 'linux'.
Feb 13 15:55:31.000487 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:55:31.001850 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:55:31.084394 kernel: SCSI subsystem initialized
Feb 13 15:55:31.093392 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:55:31.104388 kernel: iscsi: registered transport (tcp)
Feb 13 15:55:31.125543 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:55:31.125608 kernel: QLogic iSCSI HBA Driver
Feb 13 15:55:31.173739 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:55:31.185521 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:55:31.210261 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:55:31.210323 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:55:31.210335 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:55:31.251389 kernel: raid6: avx2x4 gen() 29761 MB/s
Feb 13 15:55:31.268384 kernel: raid6: avx2x2 gen() 29961 MB/s
Feb 13 15:55:31.285575 kernel: raid6: avx2x1 gen() 25772 MB/s
Feb 13 15:55:31.285597 kernel: raid6: using algorithm avx2x2 gen() 29961 MB/s
Feb 13 15:55:31.303588 kernel: raid6: .... xor() 19902 MB/s, rmw enabled
Feb 13 15:55:31.303611 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:55:31.324390 kernel: xor: automatically using best checksumming function avx
Feb 13 15:55:31.769388 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:55:31.780657 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:55:31.789475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:55:31.801490 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Feb 13 15:55:31.806340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:55:31.819537 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:55:31.832924 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Feb 13 15:55:31.867847 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:55:31.877520 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:55:31.939195 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:55:31.953539 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:55:31.968096 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:55:31.972849 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:55:31.975596 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 15:55:32.027318 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:55:32.027343 kernel: libata version 3.00 loaded.
Feb 13 15:55:32.027439 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:55:32.027464 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:55:32.027479 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:55:32.030534 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 15:55:32.036822 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 15:55:32.036844 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 15:55:32.037043 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 15:55:32.037224 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:55:32.037241 kernel: GPT:9289727 != 19775487
Feb 13 15:55:32.037264 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:55:32.037281 kernel: scsi host0: ahci
Feb 13 15:55:32.037496 kernel: scsi host1: ahci
Feb 13 15:55:32.037686 kernel: GPT:9289727 != 19775487
Feb 13 15:55:32.037701 kernel: scsi host2: ahci
Feb 13 15:55:32.037907 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:55:32.037923 kernel: scsi host3: ahci
Feb 13 15:55:32.038106 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:55:32.038119 kernel: scsi host4: ahci
Feb 13 15:55:32.038302 kernel: scsi host5: ahci
Feb 13 15:55:32.038506 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 15:55:32.038519 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 15:55:32.038531 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 15:55:32.038545 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 15:55:32.038559 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 15:55:32.038578 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 15:55:31.975973 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:55:31.977582 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:55:31.987544 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:55:32.012808 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:55:32.018879 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:55:32.019024 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:55:32.056232 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (468)
Feb 13 15:55:32.027459 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:55:32.062548 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (467)
Feb 13 15:55:32.031048 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:55:32.031224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:55:32.037415 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:55:32.051158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:55:32.069698 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:55:32.081991 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:55:32.116574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:55:32.122075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:55:32.126046 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:55:32.126111 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:55:32.142508 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:55:32.145867 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:55:32.153542 disk-uuid[556]: Primary Header is updated.
Feb 13 15:55:32.153542 disk-uuid[556]: Secondary Entries is updated.
Feb 13 15:55:32.153542 disk-uuid[556]: Secondary Header is updated.
Feb 13 15:55:32.156375 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:55:32.175030 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:55:32.342516 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 15:55:32.342584 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 15:55:32.342597 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 15:55:32.343373 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 15:55:32.344382 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 15:55:32.345383 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 15:55:32.346535 kernel: ata3.00: applying bridge limits
Feb 13 15:55:32.346553 kernel: ata3.00: configured for UDMA/100
Feb 13 15:55:32.347384 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:55:32.351378 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 15:55:32.398392 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 15:55:32.411992 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:55:32.412012 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:55:33.165377 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:55:33.165839 disk-uuid[559]: The operation has completed successfully.
Feb 13 15:55:33.192681 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:55:33.192807 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:55:33.224499 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:55:33.227676 sh[594]: Success
Feb 13 15:55:33.240382 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 15:55:33.277421 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:55:33.298066 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:55:33.302990 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:55:33.313169 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a
Feb 13 15:55:33.313228 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:55:33.313243 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:55:33.314186 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:55:33.314932 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:55:33.319880 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:55:33.321724 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:55:33.333482 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:55:33.335441 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:55:33.345578 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:55:33.345624 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:55:33.345635 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:55:33.348385 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:55:33.357881 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:55:33.359976 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:55:33.369241 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:55:33.376533 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:55:33.472491 ignition[691]: Ignition 2.20.0
Feb 13 15:55:33.472505 ignition[691]: Stage: fetch-offline
Feb 13 15:55:33.472550 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:55:33.472563 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:55:33.472676 ignition[691]: parsed url from cmdline: ""
Feb 13 15:55:33.472681 ignition[691]: no config URL provided
Feb 13 15:55:33.477567 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:55:33.472687 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:55:33.472708 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:55:33.472746 ignition[691]: op(1): [started] loading QEMU firmware config module
Feb 13 15:55:33.472753 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:55:33.485552 ignition[691]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:55:33.486941 ignition[691]: parsing config with SHA512: 6507207c1ba69e973f4fd56ac8630d3f7326b83385d4d631f7cee7b6c0c4850eea1835d8ea3253dbd0f51e04d91fb71e884c194d5d5cbcf01f5d9ed5a6cb9e0a
Feb 13 15:55:33.488559 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:55:33.489998 unknown[691]: fetched base config from "system"
Feb 13 15:55:33.490458 ignition[691]: fetch-offline: fetch-offline passed
Feb 13 15:55:33.490007 unknown[691]: fetched user config from "qemu"
Feb 13 15:55:33.490540 ignition[691]: Ignition finished successfully
Feb 13 15:55:33.494178 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:55:33.510117 systemd-networkd[783]: lo: Link UP
Feb 13 15:55:33.510127 systemd-networkd[783]: lo: Gained carrier
Feb 13 15:55:33.511773 systemd-networkd[783]: Enumeration completed
Feb 13 15:55:33.512162 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:55:33.512166 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:55:33.512929 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:55:33.512979 systemd-networkd[783]: eth0: Link UP
Feb 13 15:55:33.512984 systemd-networkd[783]: eth0: Gained carrier
Feb 13 15:55:33.512991 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:55:33.522001 systemd[1]: Reached target network.target - Network.
Feb 13 15:55:33.523743 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:55:33.536411 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:55:33.538059 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:55:33.561546 ignition[786]: Ignition 2.20.0
Feb 13 15:55:33.561559 ignition[786]: Stage: kargs
Feb 13 15:55:33.561753 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:55:33.561765 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:55:33.565579 ignition[786]: kargs: kargs passed
Feb 13 15:55:33.565623 ignition[786]: Ignition finished successfully
Feb 13 15:55:33.569559 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:55:33.594535 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:55:33.614212 ignition[796]: Ignition 2.20.0
Feb 13 15:55:33.614224 ignition[796]: Stage: disks
Feb 13 15:55:33.614391 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:55:33.614402 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:55:33.617169 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:55:33.615007 ignition[796]: disks: disks passed
Feb 13 15:55:33.619493 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:55:33.615051 ignition[796]: Ignition finished successfully
Feb 13 15:55:33.621612 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:55:33.623050 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:55:33.624668 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:55:33.626766 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:55:33.634590 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:55:33.646810 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:55:33.652992 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:55:33.662473 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:55:33.748335 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:55:33.750006 kernel: EXT4-fs (vda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none.
Feb 13 15:55:33.749024 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:55:33.757431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:55:33.759501 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:55:33.759814 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:55:33.759853 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:55:33.767921 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815)
Feb 13 15:55:33.759872 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:55:33.771708 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:55:33.771723 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:55:33.771734 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:55:33.773370 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:55:33.774902 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:55:33.781748 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:55:33.782690 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:55:33.818785 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:55:33.823520 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:55:33.828123 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:55:33.832513 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:55:33.916461 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:55:33.930499 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:55:33.932105 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:55:33.942372 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:55:33.957333 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:55:34.007564 ignition[932]: INFO : Ignition 2.20.0
Feb 13 15:55:34.007564 ignition[932]: INFO : Stage: mount
Feb 13 15:55:34.029035 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:55:34.029035 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:55:34.029035 ignition[932]: INFO : mount: mount passed
Feb 13 15:55:34.029035 ignition[932]: INFO : Ignition finished successfully
Feb 13 15:55:34.010588 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:55:34.039515 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:55:34.312450 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:55:34.329656 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:55:34.337203 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Feb 13 15:55:34.337228 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:55:34.337240 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:55:34.338844 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:55:34.341379 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:55:34.342928 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:55:34.379292 ignition[959]: INFO : Ignition 2.20.0
Feb 13 15:55:34.379292 ignition[959]: INFO : Stage: files
Feb 13 15:55:34.381311 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:55:34.381311 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:55:34.381311 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:55:34.381311 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:55:34.381311 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:55:34.388658 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:55:34.383624 unknown[959]: wrote ssh authorized keys file for user: core
Feb 13 15:55:34.751578 systemd-networkd[783]: eth0: Gained IPv6LL
Feb 13 15:55:34.876036 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:55:35.561241 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:55:35.561241 ignition[959]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 15:55:35.565695 ignition[959]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:55:35.565695 ignition[959]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:55:35.565695 ignition[959]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 15:55:35.565695 ignition[959]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:55:35.591692 ignition[959]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:55:35.597747 ignition[959]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:55:35.599496 ignition[959]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:55:35.599496 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:55:35.599496 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:55:35.599496 ignition[959]: INFO : files: files passed
Feb 13 15:55:35.599496 ignition[959]: INFO : Ignition finished successfully
Feb 13 15:55:35.608983 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:55:35.622491 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:55:35.625178 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:55:35.628029 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:55:35.628145 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:55:35.642404 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:55:35.646336 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:55:35.646336 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:55:35.650008 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:55:35.655045 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:55:35.655300 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:55:35.671479 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:55:35.694311 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:55:35.694446 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:55:35.695749 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:55:35.698692 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:55:35.699749 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:55:35.700489 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:55:35.718114 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:55:35.726531 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:55:35.735693 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:55:35.736951 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:55:35.739166 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:55:35.741181 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:55:35.741292 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:55:35.743605 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:55:35.745151 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:55:35.747179 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:55:35.749219 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:55:35.751273 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:55:35.753434 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:55:35.755547 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:55:35.757830 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:55:35.759875 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:55:35.762064 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:55:35.763963 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:55:35.764104 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:55:35.766412 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:55:35.767851 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:55:35.769911 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:55:35.770034 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:55:35.772120 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:55:35.772228 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:55:35.774596 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:55:35.774714 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:55:35.776727 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:55:35.778639 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:55:35.783435 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:55:35.786052 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:55:35.787944 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:55:35.790104 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:55:35.790199 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:55:35.792803 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:55:35.792892 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:55:35.794851 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:55:35.794961 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:55:35.797141 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:55:35.797242 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:55:35.809539 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:55:35.812560 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:55:35.813567 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:55:35.813746 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:55:35.815983 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:55:35.816100 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:55:35.824192 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:55:35.825345 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:55:35.845202 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:55:35.859971 ignition[1014]: INFO : Ignition 2.20.0
Feb 13 15:55:35.859971 ignition[1014]: INFO : Stage: umount
Feb 13 15:55:35.861660 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:55:35.861660 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:55:35.861660 ignition[1014]: INFO : umount: umount passed
Feb 13 15:55:35.861660 ignition[1014]: INFO : Ignition finished successfully
Feb 13 15:55:35.863410 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:55:35.863544 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:55:35.865149 systemd[1]: Stopped target network.target - Network.
Feb 13 15:55:35.866596 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:55:35.866658 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:55:35.868491 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:55:35.868538 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:55:35.870462 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:55:35.870507 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:55:35.872425 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:55:35.872471 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:55:35.874429 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:55:35.876445 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:55:35.883388 systemd-networkd[783]: eth0: DHCPv6 lease lost
Feb 13 15:55:35.884301 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:55:35.884463 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:55:35.886640 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:55:35.886784 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:55:35.889679 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:55:35.889740 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:55:35.895532 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:55:35.896775 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:55:35.896846 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:55:35.899153 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:55:35.899199 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:55:35.901306 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:55:35.901362 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:55:35.903423 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:55:35.903472 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:55:35.906077 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:55:35.914651 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:55:35.914789 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:55:35.920225 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:55:35.921396 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:55:35.924365 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:55:35.924421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:55:35.927837 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:55:35.927886 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:55:35.931228 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:55:35.932241 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:55:35.934593 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:55:35.934655 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:55:35.937873 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:55:35.937926 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:55:35.951480 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:55:35.953925 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:55:35.955074 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:55:35.957783 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:55:35.957840 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:55:35.986203 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:55:35.987249 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:55:35.989860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:55:35.989912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:55:35.993648 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:55:35.994764 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:55:36.180235 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:55:36.181277 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:55:36.183750 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:55:36.185804 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:55:36.185871 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:55:36.200491 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:55:36.208303 systemd[1]: Switching root.
Feb 13 15:55:36.243200 systemd-journald[194]: Journal stopped
Feb 13 15:55:37.384220 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:55:37.384284 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:55:37.384303 kernel: SELinux: policy capability open_perms=1
Feb 13 15:55:37.384314 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:55:37.384327 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:55:37.384338 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:55:37.384362 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:55:37.384385 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:55:37.384407 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:55:37.384424 kernel: audit: type=1403 audit(1739462136.631:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:55:37.384437 systemd[1]: Successfully loaded SELinux policy in 39.571ms.
Feb 13 15:55:37.384458 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.480ms.
Feb 13 15:55:37.384471 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:55:37.384483 systemd[1]: Detected virtualization kvm.
Feb 13 15:55:37.384496 systemd[1]: Detected architecture x86-64.
Feb 13 15:55:37.384508 systemd[1]: Detected first boot.
Feb 13 15:55:37.384523 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:55:37.384543 zram_generator::config[1060]: No configuration found.
Feb 13 15:55:37.384556 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:55:37.384569 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:55:37.384581 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:55:37.384603 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:55:37.384616 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:55:37.384629 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:55:37.384641 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:55:37.384653 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:55:37.384666 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:55:37.384678 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:55:37.384695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:55:37.384714 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:55:37.384729 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:55:37.384742 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:55:37.384754 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:55:37.384766 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:55:37.384779 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:55:37.384791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:55:37.384803 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:55:37.384816 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:55:37.384828 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:55:37.384843 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:55:37.384855 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:55:37.384867 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:55:37.384880 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:55:37.384892 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:55:37.384904 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:55:37.384916 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:55:37.384928 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:55:37.384943 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:55:37.384955 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:55:37.384967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:55:37.384985 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:55:37.384997 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:55:37.385010 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:55:37.385022 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:55:37.385034 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:55:37.385047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:55:37.385077 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:55:37.385089 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:55:37.385102 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:55:37.385115 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:55:37.385127 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:55:37.385139 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:55:37.385151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:55:37.385164 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:55:37.385178 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:55:37.385191 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:55:37.385203 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:55:37.385216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:55:37.385228 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:55:37.385240 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:55:37.385253 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:55:37.385273 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:55:37.385288 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:55:37.385300 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:55:37.385312 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:55:37.385329 kernel: fuse: init (API version 7.39)
Feb 13 15:55:37.385341 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:55:37.385378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:55:37.385393 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:55:37.385407 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:55:37.385419 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:55:37.385435 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:55:37.385447 systemd[1]: Stopped verity-setup.service.
Feb 13 15:55:37.385460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:55:37.385472 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:55:37.385484 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:55:37.385496 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:55:37.385509 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:55:37.385524 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:55:37.385536 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:55:37.385548 kernel: loop: module loaded
Feb 13 15:55:37.385560 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:55:37.385599 systemd-journald[1130]: Collecting audit messages is disabled.
Feb 13 15:55:37.385624 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:55:37.385636 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:55:37.385649 systemd-journald[1130]: Journal started
Feb 13 15:55:37.385671 systemd-journald[1130]: Runtime Journal (/run/log/journal/40831c46a01241cfb77e6e39e439ed81) is 6.0M, max 48.3M, 42.3M free.
Feb 13 15:55:37.132951 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:55:37.158105 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:55:37.158572 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:55:37.388421 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:55:37.390161 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:55:37.390339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:55:37.391880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:55:37.392097 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:55:37.393604 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:55:37.393774 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:55:37.395260 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:55:37.395453 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:55:37.396918 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:55:37.399300 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:55:37.413614 kernel: ACPI: bus type drm_connector registered
Feb 13 15:55:37.413534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:55:37.415328 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:55:37.415681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:55:37.420421 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:55:37.429529 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:55:37.432085 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:55:37.433268 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:55:37.433306 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:55:37.435449 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:55:37.503755 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:55:37.506575 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:55:37.508613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:55:37.511065 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:55:37.514571 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:55:37.516639 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:55:37.518527 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:55:37.560420 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:55:37.562608 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:55:37.565074 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:55:37.567294 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:55:37.570255 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:55:37.571716 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:55:37.573013 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:55:37.574652 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:55:37.620556 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:55:37.626225 systemd-journald[1130]: Time spent on flushing to /var/log/journal/40831c46a01241cfb77e6e39e439ed81 is 27.208ms for 939 entries.
Feb 13 15:55:37.626225 systemd-journald[1130]: System Journal (/var/log/journal/40831c46a01241cfb77e6e39e439ed81) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:55:37.938118 systemd-journald[1130]: Received client request to flush runtime journal.
Feb 13 15:55:37.938185 kernel: loop0: detected capacity change from 0 to 141000
Feb 13 15:55:37.938212 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:55:37.938229 kernel: loop1: detected capacity change from 0 to 138184
Feb 13 15:55:37.633243 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:55:37.668012 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:55:37.715929 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:55:37.789045 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Feb 13 15:55:37.789060 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Feb 13 15:55:37.799180 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:55:37.809624 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:55:37.815948 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:55:37.818233 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:55:37.826695 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:55:37.910057 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:55:37.916604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:55:37.940309 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:55:37.945531 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Feb 13 15:55:37.945549 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Feb 13 15:55:37.948409 kernel: loop2: detected capacity change from 0 to 211296
Feb 13 15:55:37.951240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:55:37.988279 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:55:37.989104 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:55:38.169388 kernel: loop3: detected capacity change from 0 to 141000
Feb 13 15:55:38.182378 kernel: loop4: detected capacity change from 0 to 138184
Feb 13 15:55:38.192374 kernel: loop5: detected capacity change from 0 to 211296
Feb 13 15:55:38.197450 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:55:38.198012 (sd-merge)[1201]: Merged extensions into '/usr'.
Feb 13 15:55:38.203648 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:55:38.203665 systemd[1]: Reloading...
Feb 13 15:55:38.263487 zram_generator::config[1227]: No configuration found.
Feb 13 15:55:38.410318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:55:38.464269 systemd[1]: Reloading finished in 260 ms.
Feb 13 15:55:38.481100 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:55:38.500216 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:55:38.501826 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:55:38.528663 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:55:38.533490 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:55:38.536994 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:55:38.537010 systemd[1]: Reloading...
Feb 13 15:55:38.571096 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:55:38.571429 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:55:38.572507 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:55:38.572828 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Feb 13 15:55:38.572899 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Feb 13 15:55:38.576871 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:55:38.576883 systemd-tmpfiles[1265]: Skipping /boot
Feb 13 15:55:38.600393 zram_generator::config[1294]: No configuration found.
Feb 13 15:55:38.610016 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:55:38.610121 systemd-tmpfiles[1265]: Skipping /boot
Feb 13 15:55:38.711881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:55:38.761082 systemd[1]: Reloading finished in 223 ms.
Feb 13 15:55:38.781728 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:55:38.792943 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:55:38.811581 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:55:38.814291 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:55:38.816694 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:55:38.820714 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:55:38.824746 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:55:38.827588 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:55:38.831845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:55:38.832254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:55:38.836434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:55:38.838733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:55:38.847404 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:55:38.847571 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:55:38.849526 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:55:38.849577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:55:38.850527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:55:38.850724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:55:38.854824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:55:38.855006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:55:38.857560 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
Feb 13 15:55:38.858631 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:55:38.858924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:55:38.866871 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:55:38.871319 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:55:38.875238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:55:38.875464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:55:38.879497 augenrules[1365]: No rules
Feb 13 15:55:38.881667 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:55:38.886028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:55:38.890609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:55:38.894604 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:55:38.906705 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:55:38.908318 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:55:38.909940 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:55:38.912128 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:55:38.912451 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:55:38.915539 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:55:38.917282 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:55:38.920985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:55:38.921158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:55:38.922909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:55:38.923086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:55:38.926018 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:55:38.938484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:55:38.940676 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1395)
Feb 13 15:55:38.941305 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:55:38.982534 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:55:38.988522 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:55:38.993050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:55:38.999187 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:55:39.000498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:55:39.002507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:55:39.014604 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:55:39.019501 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:55:39.024111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:55:39.026206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:55:39.031493 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:55:39.035863 augenrules[1405]: /sbin/augenrules: No change
Feb 13 15:55:39.036573 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:55:39.037890 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:55:39.037917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:55:39.038510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:55:39.038716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:55:39.041958 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:55:39.042127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:55:39.044050 systemd-resolved[1334]: Positive Trust Anchors:
Feb 13 15:55:39.044062 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:55:39.044093 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:55:39.045800 augenrules[1434]: No rules
Feb 13 15:55:39.045822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:55:39.049400 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:55:39.046001 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:55:39.047885 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:55:39.048101 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:55:39.057710 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:55:39.057894 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:55:39.059788 systemd-resolved[1334]: Defaulting to hostname 'linux'.
Feb 13 15:55:39.061810 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:55:39.067909 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:55:39.072172 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:55:39.073636 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:55:39.073703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:55:39.075998 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:55:39.080488 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:55:39.096394 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 15:55:39.102084 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 15:55:39.102454 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 15:55:39.102667 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 15:55:39.133699 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:55:39.169747 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:55:39.227973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:55:39.238320 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:55:39.241376 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:55:39.268035 kernel: kvm_amd: TSC scaling supported
Feb 13 15:55:39.268126 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 15:55:39.268141 kernel: kvm_amd: Nested Paging enabled
Feb 13 15:55:39.270041 kernel: kvm_amd: LBR virtualization supported
Feb 13 15:55:39.270076 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 15:55:39.270090 kernel: kvm_amd: Virtual GIF supported
Feb 13 15:55:39.276212 systemd-networkd[1421]: lo: Link UP
Feb 13 15:55:39.276230 systemd-networkd[1421]: lo: Gained carrier
Feb 13 15:55:39.278938 systemd-networkd[1421]: Enumeration completed
Feb 13 15:55:39.279123 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:55:39.279320 systemd[1]: Reached target network.target - Network.
Feb 13 15:55:39.280050 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:55:39.280062 systemd-networkd[1421]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:55:39.281385 systemd-networkd[1421]: eth0: Link UP
Feb 13 15:55:39.281395 systemd-networkd[1421]: eth0: Gained carrier
Feb 13 15:55:39.281408 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:55:39.290997 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:55:39.294389 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:55:39.295433 systemd-networkd[1421]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:55:39.297057 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection.
Feb 13 15:55:39.772332 systemd-resolved[1334]: Clock change detected. Flushing caches.
Feb 13 15:55:39.772420 systemd-timesyncd[1426]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:55:39.772474 systemd-timesyncd[1426]: Initial clock synchronization to Thu 2025-02-13 15:55:39.771879 UTC.
Feb 13 15:55:39.802577 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:55:39.825599 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:55:39.827461 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:55:39.834601 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:55:39.870369 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:55:39.872007 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:55:39.873232 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:55:39.874506 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:55:39.875900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:55:39.877514 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:55:39.878859 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:55:39.880324 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:55:39.881704 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:55:39.881729 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:55:39.882730 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:55:39.884680 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:55:39.887983 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:55:39.893738 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:55:39.896561 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:55:39.898259 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:55:39.899519 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:55:39.900850 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:55:39.901930 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:55:39.901964 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:55:39.903005 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:55:39.905138 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:55:39.908628 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:55:39.912611 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:55:39.913670 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:55:39.916175 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:55:39.918584 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:55:39.923580 jq[1468]: false
Feb 13 15:55:39.925768 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:55:39.929821 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:55:39.942404 extend-filesystems[1469]: Found loop3
Feb 13 15:55:39.942475 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:55:39.944622 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:55:39.945160 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found loop4
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found loop5
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found sr0
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found vda
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found vda1
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found vda2
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found vda3
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found usr
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found vda4
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found vda6
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found vda7
Feb 13 15:55:39.946126 extend-filesystems[1469]: Found vda9
Feb 13 15:55:39.946126 extend-filesystems[1469]: Checking size of /dev/vda9
Feb 13 15:55:39.945758 dbus-daemon[1467]: [system] SELinux support is enabled
Feb 13 15:55:39.947077 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:55:39.951452 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:55:39.954042 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:55:39.958956 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:55:39.968235 update_engine[1480]: I20250213 15:55:39.968135 1480 main.cc:92] Flatcar Update Engine starting
Feb 13 15:55:39.971092 update_engine[1480]: I20250213 15:55:39.969532 1480 update_check_scheduler.cc:74] Next update check in 2m37s
Feb 13 15:55:39.970623 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:55:39.970845 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:55:39.971174 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:55:39.971479 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:55:39.971991 extend-filesystems[1469]: Resized partition /dev/vda9
Feb 13 15:55:39.974513 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:55:39.974764 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:55:39.978678 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:55:39.979789 jq[1482]: true
Feb 13 15:55:39.984331 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:55:39.995327 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1372)
Feb 13 15:55:40.002669 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:55:40.007933 jq[1494]: true
Feb 13 15:55:40.010362 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:55:40.009231 systemd-logind[1476]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:55:40.009251 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:55:40.011722 systemd-logind[1476]: New seat seat0.
Feb 13 15:55:40.020905 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:55:40.035509 extend-filesystems[1490]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:55:40.035509 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:55:40.035509 extend-filesystems[1490]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:55:40.039118 extend-filesystems[1469]: Resized filesystem in /dev/vda9
Feb 13 15:55:40.040838 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:55:40.041077 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:55:40.043507 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:55:40.046109 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:55:40.046265 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:55:40.050837 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:55:40.050974 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:55:40.061582 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:55:40.070918 bash[1519]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:55:40.074127 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:55:40.076203 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:55:40.138742 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:55:40.178991 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:55:40.206729 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:55:40.218792 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:55:40.256274 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:55:40.256551 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:55:40.259403 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:55:40.296435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:55:40.312572 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:55:40.314714 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:55:40.315953 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:55:40.357796 containerd[1491]: time="2025-02-13T15:55:40.357706916Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:55:40.381913 containerd[1491]: time="2025-02-13T15:55:40.381861154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:55:40.383873 containerd[1491]: time="2025-02-13T15:55:40.383830958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:55:40.383873 containerd[1491]: time="2025-02-13T15:55:40.383860884Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:55:40.383939 containerd[1491]: time="2025-02-13T15:55:40.383876283Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:55:40.384127 containerd[1491]: time="2025-02-13T15:55:40.384097958Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:55:40.384127 containerd[1491]: time="2025-02-13T15:55:40.384118337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:55:40.384227 containerd[1491]: time="2025-02-13T15:55:40.384199619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:55:40.384227 containerd[1491]: time="2025-02-13T15:55:40.384215759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:55:40.384466 containerd[1491]: time="2025-02-13T15:55:40.384435351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:55:40.384466 containerd[1491]: time="2025-02-13T15:55:40.384455659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:55:40.384513 containerd[1491]: time="2025-02-13T15:55:40.384468664Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:55:40.384513 containerd[1491]: time="2025-02-13T15:55:40.384486267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:55:40.384613 containerd[1491]: time="2025-02-13T15:55:40.384592005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:55:40.384893 containerd[1491]: time="2025-02-13T15:55:40.384863885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:55:40.385011 containerd[1491]: time="2025-02-13T15:55:40.384984140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:55:40.385011 containerd[1491]: time="2025-02-13T15:55:40.385001182Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:55:40.385148 containerd[1491]: time="2025-02-13T15:55:40.385120676Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:55:40.385206 containerd[1491]: time="2025-02-13T15:55:40.385186800Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:55:40.391107 containerd[1491]: time="2025-02-13T15:55:40.391062379Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:55:40.391147 containerd[1491]: time="2025-02-13T15:55:40.391119015Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:55:40.391147 containerd[1491]: time="2025-02-13T15:55:40.391134555Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:55:40.391195 containerd[1491]: time="2025-02-13T15:55:40.391150765Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:55:40.391195 containerd[1491]: time="2025-02-13T15:55:40.391165012Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:55:40.391337 containerd[1491]: time="2025-02-13T15:55:40.391300426Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:55:40.391576 containerd[1491]: time="2025-02-13T15:55:40.391547008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:55:40.391689 containerd[1491]: time="2025-02-13T15:55:40.391662024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:55:40.391689 containerd[1491]: time="2025-02-13T15:55:40.391681731Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:55:40.391738 containerd[1491]: time="2025-02-13T15:55:40.391694344Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:55:40.391738 containerd[1491]: time="2025-02-13T15:55:40.391708250Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:55:40.391775 containerd[1491]: time="2025-02-13T15:55:40.391740250Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:55:40.391775 containerd[1491]: time="2025-02-13T15:55:40.391753455Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:55:40.391775 containerd[1491]: time="2025-02-13T15:55:40.391765738Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:55:40.391826 containerd[1491]: time="2025-02-13T15:55:40.391778332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:55:40.391826 containerd[1491]: time="2025-02-13T15:55:40.391790465Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:55:40.391826 containerd[1491]: time="2025-02-13T15:55:40.391801846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:55:40.391826 containerd[1491]: time="2025-02-13T15:55:40.391811985Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:55:40.391899 containerd[1491]: time="2025-02-13T15:55:40.391840108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391899 containerd[1491]: time="2025-02-13T15:55:40.391854024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391899 containerd[1491]: time="2025-02-13T15:55:40.391865926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391899 containerd[1491]: time="2025-02-13T15:55:40.391879031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391899 containerd[1491]: time="2025-02-13T15:55:40.391891524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391993 containerd[1491]: time="2025-02-13T15:55:40.391905070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391993 containerd[1491]: time="2025-02-13T15:55:40.391917182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391993 containerd[1491]: time="2025-02-13T15:55:40.391929786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391993 containerd[1491]: time="2025-02-13T15:55:40.391942450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391993 containerd[1491]: time="2025-02-13T15:55:40.391957187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391993 containerd[1491]: time="2025-02-13T15:55:40.391968148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391993 containerd[1491]: time="2025-02-13T15:55:40.391981813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.391993 containerd[1491]: time="2025-02-13T15:55:40.391994267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.392135 containerd[1491]: time="2025-02-13T15:55:40.392008664Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:55:40.392135 containerd[1491]: time="2025-02-13T15:55:40.392031116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.392135 containerd[1491]: time="2025-02-13T15:55:40.392044501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.392950 containerd[1491]: time="2025-02-13T15:55:40.392772516Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:55:40.393061 containerd[1491]: time="2025-02-13T15:55:40.392985866Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:55:40.393061 containerd[1491]: time="2025-02-13T15:55:40.393007176Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:55:40.393061 containerd[1491]: time="2025-02-13T15:55:40.393020592Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:55:40.393061 containerd[1491]: time="2025-02-13T15:55:40.393033886Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:55:40.393061 containerd[1491]: time="2025-02-13T15:55:40.393043675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.393061 containerd[1491]: time="2025-02-13T15:55:40.393057371Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:55:40.393192 containerd[1491]: time="2025-02-13T15:55:40.393068361Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:55:40.393192 containerd[1491]: time="2025-02-13T15:55:40.393080885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:55:40.393420 containerd[1491]: time="2025-02-13T15:55:40.393378894Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:55:40.393639 containerd[1491]: time="2025-02-13T15:55:40.393425210Z" level=info msg="Connect containerd service"
Feb 13 15:55:40.393639 containerd[1491]: time="2025-02-13T15:55:40.393464464Z" level=info msg="using legacy CRI server"
Feb 13 15:55:40.393639 containerd[1491]: time="2025-02-13T15:55:40.393471718Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:55:40.393639 containerd[1491]: time="2025-02-13T15:55:40.393586463Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:55:40.394258 containerd[1491]: time="2025-02-13T15:55:40.394229709Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:55:40.394422 containerd[1491]: time="2025-02-13T15:55:40.394373729Z" level=info msg="Start subscribing containerd event"
Feb 13 15:55:40.394463 containerd[1491]: time="2025-02-13T15:55:40.394448339Z" level=info msg="Start recovering state"
Feb 13 15:55:40.394630 containerd[1491]: time="2025-02-13T15:55:40.394607177Z" level=info msg="Start event monitor"
Feb 13 15:55:40.394660 containerd[1491]: time="2025-02-13T15:55:40.394637253Z" level=info msg="Start snapshots syncer"
Feb 13 15:55:40.394660 containerd[1491]: time="2025-02-13T15:55:40.394652051Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:55:40.394708 containerd[1491]: time="2025-02-13T15:55:40.394660808Z" level=info msg="Start streaming server"
Feb 13 15:55:40.394708 containerd[1491]: time="2025-02-13T15:55:40.394610664Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:55:40.394761 containerd[1491]: time="2025-02-13T15:55:40.394740487Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:55:40.394905 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:55:40.395381 containerd[1491]: time="2025-02-13T15:55:40.395352244Z" level=info msg="containerd successfully booted in 0.038684s"
Feb 13 15:55:41.049529 systemd-networkd[1421]: eth0: Gained IPv6LL
Feb 13 15:55:41.052812 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:55:41.055015 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:55:41.077579 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:55:41.080279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:55:41.082966 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:55:41.119133 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:55:41.122355 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:55:41.122587 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:55:41.125059 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:55:42.208037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:55:42.209985 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:55:42.212382 systemd[1]: Startup finished in 698ms (kernel) + 5.935s (initrd) + 5.146s (userspace) = 11.780s.
Feb 13 15:55:42.233824 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:55:42.276838 agetty[1545]: failed to open credentials directory
Feb 13 15:55:42.279165 agetty[1544]: failed to open credentials directory
Feb 13 15:55:42.994823 kubelet[1572]: E0213 15:55:42.994678 1572 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:55:42.999701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:55:42.999919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:55:43.000267 systemd[1]: kubelet.service: Consumed 1.774s CPU time.
Feb 13 15:55:49.789014 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:55:49.805039 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:43384.service - OpenSSH per-connection server daemon (10.0.0.1:43384).
Feb 13 15:55:49.932091 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 43384 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:55:49.934791 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:55:49.948513 systemd-logind[1476]: New session 1 of user core.
Feb 13 15:55:49.950048 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:55:49.964798 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:55:49.983043 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:55:49.995905 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:55:50.000885 (systemd)[1590]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:55:50.113001 systemd[1590]: Queued start job for default target default.target.
Feb 13 15:55:50.123593 systemd[1590]: Created slice app.slice - User Application Slice.
Feb 13 15:55:50.123623 systemd[1590]: Reached target paths.target - Paths.
Feb 13 15:55:50.123638 systemd[1590]: Reached target timers.target - Timers.
Feb 13 15:55:50.125142 systemd[1590]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:55:50.136882 systemd[1590]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:55:50.136997 systemd[1590]: Reached target sockets.target - Sockets.
Feb 13 15:55:50.137015 systemd[1590]: Reached target basic.target - Basic System.
Feb 13 15:55:50.137050 systemd[1590]: Reached target default.target - Main User Target.
Feb 13 15:55:50.137082 systemd[1590]: Startup finished in 127ms.
Feb 13 15:55:50.138125 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:55:50.140157 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:55:50.206354 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:43396.service - OpenSSH per-connection server daemon (10.0.0.1:43396).
Feb 13 15:55:50.248684 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 43396 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:55:50.250099 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:55:50.254464 systemd-logind[1476]: New session 2 of user core.
Feb 13 15:55:50.264460 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:55:50.318019 sshd[1603]: Connection closed by 10.0.0.1 port 43396
Feb 13 15:55:50.318448 sshd-session[1601]: pam_unix(sshd:session): session closed for user core
Feb 13 15:55:50.331102 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:43396.service: Deactivated successfully.
Feb 13 15:55:50.333065 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:55:50.334598 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:55:50.343545 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:43398.service - OpenSSH per-connection server daemon (10.0.0.1:43398).
Feb 13 15:55:50.344323 systemd-logind[1476]: Removed session 2.
Feb 13 15:55:50.378676 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 43398 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:55:50.380573 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:55:50.384766 systemd-logind[1476]: New session 3 of user core.
Feb 13 15:55:50.396428 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:55:50.446194 sshd[1610]: Connection closed by 10.0.0.1 port 43398
Feb 13 15:55:50.446707 sshd-session[1608]: pam_unix(sshd:session): session closed for user core
Feb 13 15:55:50.457203 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:43398.service: Deactivated successfully.
Feb 13 15:55:50.459179 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:55:50.460568 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:55:50.468662 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:43402.service - OpenSSH per-connection server daemon (10.0.0.1:43402).
Feb 13 15:55:50.469680 systemd-logind[1476]: Removed session 3.
Feb 13 15:55:50.505676 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 43402 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:55:50.507085 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:55:50.510865 systemd-logind[1476]: New session 4 of user core.
Feb 13 15:55:50.520443 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:55:50.574014 sshd[1617]: Connection closed by 10.0.0.1 port 43402
Feb 13 15:55:50.574426 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
Feb 13 15:55:50.587381 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:43402.service: Deactivated successfully.
Feb 13 15:55:50.589131 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:55:50.590763 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:55:50.598551 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:43408.service - OpenSSH per-connection server daemon (10.0.0.1:43408).
Feb 13 15:55:50.599390 systemd-logind[1476]: Removed session 4.
Feb 13 15:55:50.633603 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 43408 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:55:50.634911 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:55:50.638704 systemd-logind[1476]: New session 5 of user core.
Feb 13 15:55:50.653429 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:55:50.738635 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:55:50.738964 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:55:50.757723 sudo[1625]: pam_unix(sudo:session): session closed for user root
Feb 13 15:55:50.759068 sshd[1624]: Connection closed by 10.0.0.1 port 43408
Feb 13 15:55:50.759506 sshd-session[1622]: pam_unix(sshd:session): session closed for user core
Feb 13 15:55:50.770041 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:43408.service: Deactivated successfully.
Feb 13 15:55:50.771838 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:55:50.773467 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:55:50.786524 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:43418.service - OpenSSH per-connection server daemon (10.0.0.1:43418).
Feb 13 15:55:50.787304 systemd-logind[1476]: Removed session 5.
Feb 13 15:55:50.821973 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 43418 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:55:50.823419 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:55:50.827136 systemd-logind[1476]: New session 6 of user core.
Feb 13 15:55:50.836436 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:55:50.891256 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:55:50.891595 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:55:50.895414 sudo[1634]: pam_unix(sudo:session): session closed for user root
Feb 13 15:55:50.901591 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:55:50.901974 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:55:50.920643 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:55:50.949438 augenrules[1656]: No rules
Feb 13 15:55:50.951276 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:55:50.951557 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:55:50.952862 sudo[1633]: pam_unix(sudo:session): session closed for user root
Feb 13 15:55:50.954399 sshd[1632]: Connection closed by 10.0.0.1 port 43418
Feb 13 15:55:50.954733 sshd-session[1630]: pam_unix(sshd:session): session closed for user core
Feb 13 15:55:50.965752 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:43418.service: Deactivated successfully.
Feb 13 15:55:50.967294 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:55:50.968589 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:55:50.978530 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:43430.service - OpenSSH per-connection server daemon (10.0.0.1:43430).
Feb 13 15:55:50.979403 systemd-logind[1476]: Removed session 6.
Feb 13 15:55:51.012926 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 43430 ssh2: RSA SHA256:BMYEU6LDfDktEqJx0pX16p0f2YyMm2MrdyJYyvbpZNM
Feb 13 15:55:51.014228 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:55:51.017851 systemd-logind[1476]: New session 7 of user core.
Feb 13 15:55:51.027415 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:55:51.079480 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:55:51.079819 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:55:51.101626 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:55:51.121454 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:55:51.121748 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:55:51.619820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:55:51.619977 systemd[1]: kubelet.service: Consumed 1.774s CPU time.
Feb 13 15:55:51.633511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:55:51.651455 systemd[1]: Reloading requested from client PID 1715 ('systemctl') (unit session-7.scope)...
Feb 13 15:55:51.651473 systemd[1]: Reloading...
Feb 13 15:55:51.756833 zram_generator::config[1756]: No configuration found.
Feb 13 15:55:52.440514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:55:52.518013 systemd[1]: Reloading finished in 866 ms.
Feb 13 15:55:52.564277 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:55:52.567769 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:55:52.568027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:55:52.569694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:55:52.719662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:55:52.724428 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:55:52.767469 kubelet[1803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:55:52.767469 kubelet[1803]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:55:52.767469 kubelet[1803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:55:52.767799 kubelet[1803]: I0213 15:55:52.767520 1803 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:55:53.098932 kubelet[1803]: I0213 15:55:53.098835 1803 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:55:53.098932 kubelet[1803]: I0213 15:55:53.098873 1803 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:55:53.099495 kubelet[1803]: I0213 15:55:53.099470 1803 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:55:53.113800 kubelet[1803]: I0213 15:55:53.113762 1803 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:55:53.126404 kubelet[1803]: I0213 15:55:53.125895 1803 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:55:53.128407 kubelet[1803]: I0213 15:55:53.128367 1803 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:55:53.128569 kubelet[1803]: I0213 15:55:53.128538 1803 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:55:53.128569 kubelet[1803]: I0213 15:55:53.128564 1803 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:55:53.128745 kubelet[1803]: I0213 15:55:53.128574 1803 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:55:53.128745 kubelet[1803]: I0213 15:55:53.128718 1803 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:55:53.128868 kubelet[1803]: I0213 15:55:53.128835 1803 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:55:53.128868 kubelet[1803]: I0213 15:55:53.128863 1803 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:55:53.128925 kubelet[1803]: I0213 15:55:53.128896 1803 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:55:53.128925 kubelet[1803]: I0213 15:55:53.128916 1803 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:55:53.129337 kubelet[1803]: E0213 15:55:53.129246 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:53.129458 kubelet[1803]: E0213 15:55:53.129351 1803 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:53.130909 kubelet[1803]: I0213 15:55:53.130884 1803 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:55:53.133830 kubelet[1803]: I0213 15:55:53.133785 1803 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:55:53.135092 kubelet[1803]: W0213 15:55:53.135038 1803 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:55:53.135740 kubelet[1803]: W0213 15:55:53.135699 1803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:55:53.135740 kubelet[1803]: W0213 15:55:53.135693 1803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.115" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:55:53.135740 kubelet[1803]: E0213 15:55:53.135737 1803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:55:53.135855 kubelet[1803]: E0213 15:55:53.135754 1803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.115" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:55:53.135989 kubelet[1803]: I0213 15:55:53.135956 1803 server.go:1256] "Started kubelet"
Feb 13 15:55:53.136926 kubelet[1803]: I0213 15:55:53.136101 1803 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:55:53.136926 kubelet[1803]: I0213 15:55:53.136422 1803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:55:53.137621 kubelet[1803]: I0213 15:55:53.137022 1803 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:55:53.137621 kubelet[1803]: I0213 15:55:53.137083 1803 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:55:53.140435 kubelet[1803]: I0213 15:55:53.140133 1803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:55:53.140514 kubelet[1803]: I0213 15:55:53.140493 1803 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:55:53.141414 kubelet[1803]: I0213 15:55:53.140860 1803 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:55:53.141414 kubelet[1803]: I0213 15:55:53.140935 1803 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:55:53.143009 kubelet[1803]: I0213 15:55:53.142993 1803 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:55:53.143386 kubelet[1803]: I0213 15:55:53.143357 1803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:55:53.145375 kubelet[1803]: I0213 15:55:53.145360 1803 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:55:53.145779 kubelet[1803]: E0213 15:55:53.145752 1803 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:55:53.146644 kubelet[1803]: E0213 15:55:53.146604 1803 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.115.1823cf9cb9e73e39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.115,UID:10.0.0.115,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.115,},FirstTimestamp:2025-02-13 15:55:53.135930937 +0000 UTC m=+0.407449745,LastTimestamp:2025-02-13 15:55:53.135930937 +0000 UTC m=+0.407449745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.115,}"
Feb 13 15:55:53.147329 kubelet[1803]: E0213 15:55:53.146877 1803 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.115\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 15:55:53.147329 kubelet[1803]: W0213 15:55:53.146970 1803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 15:55:53.147329 kubelet[1803]: E0213 15:55:53.146987 1803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 15:55:53.157898 kubelet[1803]: E0213 15:55:53.157845 1803 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.115.1823cf9cba7ce658 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.115,UID:10.0.0.115,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.115,},FirstTimestamp:2025-02-13 15:55:53.14573884 +0000 UTC m=+0.417257648,LastTimestamp:2025-02-13 15:55:53.14573884 +0000 UTC m=+0.417257648,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.115,}"
Feb 13 15:55:53.158733 kubelet[1803]: I0213 15:55:53.158647 1803 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:55:53.158733 kubelet[1803]: I0213 15:55:53.158661 1803 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:55:53.158733 kubelet[1803]: I0213 15:55:53.158693 1803 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:55:53.159196 kubelet[1803]: E0213 15:55:53.159170 1803 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.115.1823cf9cbb36453d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.115,UID:10.0.0.115,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.115 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.115,},FirstTimestamp:2025-02-13 15:55:53.157887293 +0000 UTC m=+0.429406101,LastTimestamp:2025-02-13 15:55:53.157887293 +0000 UTC m=+0.429406101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.115,}"
Feb 13 15:55:53.162450 kubelet[1803]: E0213 15:55:53.162426 1803 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.115.1823cf9cbb3ac52a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.115,UID:10.0.0.115,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.115 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.115,},FirstTimestamp:2025-02-13 15:55:53.158182186 +0000 UTC m=+0.429700994,LastTimestamp:2025-02-13 15:55:53.158182186 +0000 UTC m=+0.429700994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.115,}"
Feb 13 15:55:53.165853 kubelet[1803]: E0213 15:55:53.165827 1803 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.115.1823cf9cbb3ad029 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.115,UID:10.0.0.115,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.115 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.115,},FirstTimestamp:2025-02-13 15:55:53.158185001 +0000 UTC m=+0.429703809,LastTimestamp:2025-02-13 15:55:53.158185001 +0000 UTC m=+0.429703809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.115,}"
Feb 13 15:55:53.242487 kubelet[1803]: I0213 15:55:53.242457 1803 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.115"
Feb 13 15:55:53.284907 kubelet[1803]: I0213 15:55:53.284872 1803 policy_none.go:49] "None policy: Start"
Feb 13 15:55:53.285683 kubelet[1803]: I0213 15:55:53.285660 1803 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:55:53.285737 kubelet[1803]: I0213 15:55:53.285689 1803 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:55:53.286696 kubelet[1803]: I0213 15:55:53.286672 1803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:55:53.288449 kubelet[1803]: I0213 15:55:53.288425 1803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:55:53.288499 kubelet[1803]: I0213 15:55:53.288462 1803 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:55:53.288499 kubelet[1803]: I0213 15:55:53.288485 1803 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:55:53.288621 kubelet[1803]: E0213 15:55:53.288607 1803 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:55:53.311087 kubelet[1803]: I0213 15:55:53.311030 1803 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.115"
Feb 13 15:55:53.322504 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:55:53.328834 kubelet[1803]: E0213 15:55:53.328793 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:53.334559 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:55:53.337466 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:55:53.348558 kubelet[1803]: I0213 15:55:53.348515 1803 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:55:53.348998 kubelet[1803]: I0213 15:55:53.348865 1803 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:55:53.351056 kubelet[1803]: E0213 15:55:53.350999 1803 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.115\" not found"
Feb 13 15:55:53.429925 kubelet[1803]: E0213 15:55:53.429864 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:53.530475 kubelet[1803]: E0213 15:55:53.530406 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:53.631491 kubelet[1803]: E0213 15:55:53.631300 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:53.732042 kubelet[1803]: E0213 15:55:53.731995 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:53.832721 kubelet[1803]: E0213 15:55:53.832652 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:53.933540 kubelet[1803]: E0213 15:55:53.933382 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:54.034102 kubelet[1803]: E0213 15:55:54.034028 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:54.102740 kubelet[1803]: I0213 15:55:54.102663 1803 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 15:55:54.102915 kubelet[1803]: W0213 15:55:54.102887 1803 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:55:54.130165 kubelet[1803]: E0213 15:55:54.130102 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:54.134278 kubelet[1803]: E0213 15:55:54.134246 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:54.235054 kubelet[1803]: E0213 15:55:54.234941 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:54.258637 sudo[1667]: pam_unix(sudo:session): session closed for user root
Feb 13 15:55:54.260004 sshd[1666]: Connection closed by 10.0.0.1 port 43430
Feb 13 15:55:54.260354 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Feb 13 15:55:54.264136 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:43430.service: Deactivated successfully.
Feb 13 15:55:54.266200 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:55:54.266896 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:55:54.267960 systemd-logind[1476]: Removed session 7.
Feb 13 15:55:54.335728 kubelet[1803]: E0213 15:55:54.335686 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:54.436258 kubelet[1803]: E0213 15:55:54.436193 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:54.536931 kubelet[1803]: E0213 15:55:54.536816 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:54.637554 kubelet[1803]: E0213 15:55:54.637518 1803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Feb 13 15:55:54.739295 kubelet[1803]: I0213 15:55:54.739258 1803 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 15:55:54.739733 containerd[1491]: time="2025-02-13T15:55:54.739685600Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:55:54.740125 kubelet[1803]: I0213 15:55:54.739958 1803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 15:55:55.130250 kubelet[1803]: I0213 15:55:55.130180 1803 apiserver.go:52] "Watching apiserver"
Feb 13 15:55:55.130669 kubelet[1803]: E0213 15:55:55.130207 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:55.133662 kubelet[1803]: I0213 15:55:55.133645 1803 topology_manager.go:215] "Topology Admit Handler" podUID="8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" podNamespace="kube-system" podName="cilium-lkm9p"
Feb 13 15:55:55.133758 kubelet[1803]: I0213 15:55:55.133748 1803 topology_manager.go:215] "Topology Admit Handler" podUID="be7a95dd-418b-4d09-8ae4-bc88d585ec36" podNamespace="kube-system" podName="kube-proxy-6jtjp"
Feb 13 15:55:55.141617 kubelet[1803]: I0213 15:55:55.141573 1803 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:55:55.143698 systemd[1]: Created slice kubepods-besteffort-podbe7a95dd_418b_4d09_8ae4_bc88d585ec36.slice - libcontainer container kubepods-besteffort-podbe7a95dd_418b_4d09_8ae4_bc88d585ec36.slice.
Feb 13 15:55:55.154775 kubelet[1803]: I0213 15:55:55.154732 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-config-path\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.154775 kubelet[1803]: I0213 15:55:55.154781 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-bpf-maps\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.154885 kubelet[1803]: I0213 15:55:55.154811 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-xtables-lock\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.154885 kubelet[1803]: I0213 15:55:55.154841 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-host-proc-sys-net\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.154885 kubelet[1803]: I0213 15:55:55.154871 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-host-proc-sys-kernel\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.154975 kubelet[1803]: I0213 15:55:55.154897 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-run\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.154975 kubelet[1803]: I0213 15:55:55.154958 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-cgroup\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.155030 kubelet[1803]: I0213 15:55:55.155011 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-clustermesh-secrets\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.155059 kubelet[1803]: I0213 15:55:55.155045 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6r29\" (UniqueName: \"kubernetes.io/projected/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-kube-api-access-v6r29\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.155083 kubelet[1803]: I0213 15:55:55.155069 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-hubble-tls\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.155132 kubelet[1803]: I0213 15:55:55.155115 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be7a95dd-418b-4d09-8ae4-bc88d585ec36-kube-proxy\") pod \"kube-proxy-6jtjp\" (UID: \"be7a95dd-418b-4d09-8ae4-bc88d585ec36\") " pod="kube-system/kube-proxy-6jtjp"
Feb 13 15:55:55.155155 kubelet[1803]: I0213 15:55:55.155147 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be7a95dd-418b-4d09-8ae4-bc88d585ec36-xtables-lock\") pod \"kube-proxy-6jtjp\" (UID: \"be7a95dd-418b-4d09-8ae4-bc88d585ec36\") " pod="kube-system/kube-proxy-6jtjp"
Feb 13 15:55:55.155190 kubelet[1803]: I0213 15:55:55.155177 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be7a95dd-418b-4d09-8ae4-bc88d585ec36-lib-modules\") pod \"kube-proxy-6jtjp\" (UID: \"be7a95dd-418b-4d09-8ae4-bc88d585ec36\") " pod="kube-system/kube-proxy-6jtjp"
Feb 13 15:55:55.155219 kubelet[1803]: I0213 15:55:55.155205 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-hostproc\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.155241 kubelet[1803]: I0213 15:55:55.155230 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cni-path\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.155278 kubelet[1803]: I0213 15:55:55.155258 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-etc-cni-netd\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.155325 kubelet[1803]: I0213 15:55:55.155292 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-lib-modules\") pod \"cilium-lkm9p\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") " pod="kube-system/cilium-lkm9p"
Feb 13 15:55:55.155392 kubelet[1803]: I0213 15:55:55.155373 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq52s\" (UniqueName: \"kubernetes.io/projected/be7a95dd-418b-4d09-8ae4-bc88d585ec36-kube-api-access-qq52s\") pod \"kube-proxy-6jtjp\" (UID: \"be7a95dd-418b-4d09-8ae4-bc88d585ec36\") " pod="kube-system/kube-proxy-6jtjp"
Feb 13 15:55:55.159058 systemd[1]: Created slice kubepods-burstable-pod8ba71f8d_cc7b_49f0_bb1d_0c74f526dd67.slice - libcontainer container kubepods-burstable-pod8ba71f8d_cc7b_49f0_bb1d_0c74f526dd67.slice.
Feb 13 15:55:55.456459 kubelet[1803]: E0213 15:55:55.456328 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:55:55.456904 containerd[1491]: time="2025-02-13T15:55:55.456857302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6jtjp,Uid:be7a95dd-418b-4d09-8ae4-bc88d585ec36,Namespace:kube-system,Attempt:0,}"
Feb 13 15:55:55.471464 kubelet[1803]: E0213 15:55:55.471411 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:55:55.471887 containerd[1491]: time="2025-02-13T15:55:55.471849478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lkm9p,Uid:8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67,Namespace:kube-system,Attempt:0,}"
Feb 13 15:55:56.131363 kubelet[1803]: E0213 15:55:56.131302 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:56.623359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767556085.mount: Deactivated successfully.
Feb 13 15:55:56.631547 containerd[1491]: time="2025-02-13T15:55:56.631491321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:55:56.634147 containerd[1491]: time="2025-02-13T15:55:56.634102808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:55:56.636091 containerd[1491]: time="2025-02-13T15:55:56.636019853Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:55:56.637109 containerd[1491]: time="2025-02-13T15:55:56.637073389Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:55:56.638003 containerd[1491]: time="2025-02-13T15:55:56.637956004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:55:56.639937 containerd[1491]: time="2025-02-13T15:55:56.639867048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:55:56.640713 containerd[1491]: time="2025-02-13T15:55:56.640686715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.168740766s"
Feb 13 15:55:56.669173 containerd[1491]: time="2025-02-13T15:55:56.669102816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.212103126s"
Feb 13 15:55:57.131936 kubelet[1803]: E0213 15:55:57.131894 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:57.343551 containerd[1491]: time="2025-02-13T15:55:57.342890819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:55:57.343551 containerd[1491]: time="2025-02-13T15:55:57.343532251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:55:57.343718 containerd[1491]: time="2025-02-13T15:55:57.343558320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:55:57.343746 containerd[1491]: time="2025-02-13T15:55:57.343693303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:55:57.402528 containerd[1491]: time="2025-02-13T15:55:57.402301515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:55:57.402528 containerd[1491]: time="2025-02-13T15:55:57.402416621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:55:57.402528 containerd[1491]: time="2025-02-13T15:55:57.402430186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:55:57.402693 containerd[1491]: time="2025-02-13T15:55:57.402543990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:55:57.404555 systemd[1]: Started cri-containerd-fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a.scope - libcontainer container fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a.
Feb 13 15:55:57.426862 systemd[1]: Started cri-containerd-3954858fd38857d0f0bea51881ee5a9f8bb60a6ce37fab96c903fcc5cec6aa2f.scope - libcontainer container 3954858fd38857d0f0bea51881ee5a9f8bb60a6ce37fab96c903fcc5cec6aa2f.
Feb 13 15:55:57.427247 containerd[1491]: time="2025-02-13T15:55:57.427183427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lkm9p,Uid:8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\""
Feb 13 15:55:57.428443 kubelet[1803]: E0213 15:55:57.428418 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:55:57.430142 containerd[1491]: time="2025-02-13T15:55:57.429437845Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:55:57.448438 containerd[1491]: time="2025-02-13T15:55:57.448396549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6jtjp,Uid:be7a95dd-418b-4d09-8ae4-bc88d585ec36,Namespace:kube-system,Attempt:0,} returns sandbox id \"3954858fd38857d0f0bea51881ee5a9f8bb60a6ce37fab96c903fcc5cec6aa2f\""
Feb 13 15:55:57.449051 kubelet[1803]: E0213 15:55:57.449011 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:55:58.132726 kubelet[1803]: E0213 15:55:58.132694 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:59.133255 kubelet[1803]: E0213 15:55:59.133215 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:00.133809 kubelet[1803]: E0213 15:56:00.133765 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:01.134545 kubelet[1803]: E0213 15:56:01.134506 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:01.786773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804467507.mount: Deactivated successfully.
Feb 13 15:56:02.135207 kubelet[1803]: E0213 15:56:02.135056 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:03.135477 kubelet[1803]: E0213 15:56:03.135420 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:04.103254 containerd[1491]: time="2025-02-13T15:56:04.103184512Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:04.103961 containerd[1491]: time="2025-02-13T15:56:04.103918709Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Feb 13 15:56:04.105056 containerd[1491]: time="2025-02-13T15:56:04.105014915Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:04.106583 containerd[1491]: time="2025-02-13T15:56:04.106544323Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.677068457s"
Feb 13 15:56:04.106583 containerd[1491]: time="2025-02-13T15:56:04.106579860Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 15:56:04.107262 containerd[1491]: time="2025-02-13T15:56:04.107226743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:56:04.108598 containerd[1491]: time="2025-02-13T15:56:04.108550325Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:56:04.125007 containerd[1491]: time="2025-02-13T15:56:04.124941194Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\""
Feb 13 15:56:04.125814 containerd[1491]: time="2025-02-13T15:56:04.125770078Z" level=info msg="StartContainer for \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\""
Feb 13 15:56:04.136351 kubelet[1803]: E0213 15:56:04.136266 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:04.157483 systemd[1]: Started cri-containerd-0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff.scope - libcontainer container 0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff.
Feb 13 15:56:04.183867 containerd[1491]: time="2025-02-13T15:56:04.183803422Z" level=info msg="StartContainer for \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\" returns successfully"
Feb 13 15:56:04.195150 systemd[1]: cri-containerd-0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff.scope: Deactivated successfully.
Feb 13 15:56:04.305409 kubelet[1803]: E0213 15:56:04.305370 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:04.846643 containerd[1491]: time="2025-02-13T15:56:04.846578181Z" level=info msg="shim disconnected" id=0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff namespace=k8s.io
Feb 13 15:56:04.846643 containerd[1491]: time="2025-02-13T15:56:04.846637201Z" level=warning msg="cleaning up after shim disconnected" id=0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff namespace=k8s.io
Feb 13 15:56:04.846643 containerd[1491]: time="2025-02-13T15:56:04.846645487Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:05.119956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff-rootfs.mount: Deactivated successfully.
Feb 13 15:56:05.137198 kubelet[1803]: E0213 15:56:05.137159 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:05.311336 kubelet[1803]: E0213 15:56:05.311267 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:05.314004 containerd[1491]: time="2025-02-13T15:56:05.313929341Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:56:05.452459 containerd[1491]: time="2025-02-13T15:56:05.452345280Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\""
Feb 13 15:56:05.452984 containerd[1491]: time="2025-02-13T15:56:05.452945376Z" level=info msg="StartContainer for \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\""
Feb 13 15:56:05.483464 systemd[1]: Started cri-containerd-27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7.scope - libcontainer container 27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7.
Feb 13 15:56:05.512606 containerd[1491]: time="2025-02-13T15:56:05.512469725Z" level=info msg="StartContainer for \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\" returns successfully"
Feb 13 15:56:05.526172 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:56:05.527106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:56:05.527197 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:56:05.535673 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:56:05.536016 systemd[1]: cri-containerd-27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7.scope: Deactivated successfully.
Feb 13 15:56:05.561566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:56:05.629514 containerd[1491]: time="2025-02-13T15:56:05.629454664Z" level=info msg="shim disconnected" id=27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7 namespace=k8s.io
Feb 13 15:56:05.629706 containerd[1491]: time="2025-02-13T15:56:05.629684104Z" level=warning msg="cleaning up after shim disconnected" id=27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7 namespace=k8s.io
Feb 13 15:56:05.629706 containerd[1491]: time="2025-02-13T15:56:05.629702218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:06.058445 containerd[1491]: time="2025-02-13T15:56:06.058396653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:06.059157 containerd[1491]: time="2025-02-13T15:56:06.059112666Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592"
Feb 13 15:56:06.060234 containerd[1491]: time="2025-02-13T15:56:06.060193473Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:06.062194 containerd[1491]: time="2025-02-13T15:56:06.062154641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:06.062863 containerd[1491]: time="2025-02-13T15:56:06.062826981Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.955570844s"
Feb 13 15:56:06.062863 containerd[1491]: time="2025-02-13T15:56:06.062854463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\""
Feb 13 15:56:06.064230 containerd[1491]: time="2025-02-13T15:56:06.064204505Z" level=info msg="CreateContainer within sandbox \"3954858fd38857d0f0bea51881ee5a9f8bb60a6ce37fab96c903fcc5cec6aa2f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:56:06.080966 containerd[1491]: time="2025-02-13T15:56:06.080924621Z" level=info msg="CreateContainer within sandbox \"3954858fd38857d0f0bea51881ee5a9f8bb60a6ce37fab96c903fcc5cec6aa2f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"78c3eff2db72ba2b27a7490703c5fe46145a086e5d86c1ffe5cd0161d5e2d51a\""
Feb 13 15:56:06.081616 containerd[1491]: time="2025-02-13T15:56:06.081380215Z" level=info msg="StartContainer for \"78c3eff2db72ba2b27a7490703c5fe46145a086e5d86c1ffe5cd0161d5e2d51a\""
Feb 13 15:56:06.117474 systemd[1]: Started cri-containerd-78c3eff2db72ba2b27a7490703c5fe46145a086e5d86c1ffe5cd0161d5e2d51a.scope - libcontainer container 78c3eff2db72ba2b27a7490703c5fe46145a086e5d86c1ffe5cd0161d5e2d51a.
Feb 13 15:56:06.120633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7-rootfs.mount: Deactivated successfully.
Feb 13 15:56:06.120864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314154248.mount: Deactivated successfully.
Feb 13 15:56:06.138048 kubelet[1803]: E0213 15:56:06.137737 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:06.149887 containerd[1491]: time="2025-02-13T15:56:06.149850091Z" level=info msg="StartContainer for \"78c3eff2db72ba2b27a7490703c5fe46145a086e5d86c1ffe5cd0161d5e2d51a\" returns successfully"
Feb 13 15:56:06.314483 kubelet[1803]: E0213 15:56:06.314364 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:06.316621 kubelet[1803]: E0213 15:56:06.316602 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:06.318704 containerd[1491]: time="2025-02-13T15:56:06.318662003Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:56:06.321750 kubelet[1803]: I0213 15:56:06.321649 1803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6jtjp" podStartSLOduration=4.707988967 podStartE2EDuration="13.321608138s" podCreationTimestamp="2025-02-13 15:55:53 +0000 UTC" firstStartedPulling="2025-02-13 15:55:57.449434155 +0000 UTC m=+4.720952963" lastFinishedPulling="2025-02-13 15:56:06.063053326 +0000 UTC m=+13.334572134" observedRunningTime="2025-02-13 15:56:06.321606395 +0000 UTC m=+13.593125223" watchObservedRunningTime="2025-02-13 15:56:06.321608138 +0000 UTC m=+13.593126947"
Feb 13 15:56:06.336770 containerd[1491]: time="2025-02-13T15:56:06.336718826Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\""
Feb 13 15:56:06.337436 containerd[1491]: time="2025-02-13T15:56:06.337379956Z" level=info msg="StartContainer for \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\""
Feb 13 15:56:06.368452 systemd[1]: Started cri-containerd-1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01.scope - libcontainer container 1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01.
Feb 13 15:56:06.398092 containerd[1491]: time="2025-02-13T15:56:06.398049193Z" level=info msg="StartContainer for \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\" returns successfully"
Feb 13 15:56:06.399432 systemd[1]: cri-containerd-1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01.scope: Deactivated successfully.
Feb 13 15:56:06.720924 containerd[1491]: time="2025-02-13T15:56:06.720745784Z" level=info msg="shim disconnected" id=1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01 namespace=k8s.io
Feb 13 15:56:06.720924 containerd[1491]: time="2025-02-13T15:56:06.720828399Z" level=warning msg="cleaning up after shim disconnected" id=1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01 namespace=k8s.io
Feb 13 15:56:06.720924 containerd[1491]: time="2025-02-13T15:56:06.720836825Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:07.119619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01-rootfs.mount: Deactivated successfully.
Feb 13 15:56:07.138528 kubelet[1803]: E0213 15:56:07.138487 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:07.322774 kubelet[1803]: E0213 15:56:07.322735 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:07.322774 kubelet[1803]: E0213 15:56:07.322744 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:07.324650 containerd[1491]: time="2025-02-13T15:56:07.324616627Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:56:07.340621 containerd[1491]: time="2025-02-13T15:56:07.340584042Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\""
Feb 13 15:56:07.341078 containerd[1491]: time="2025-02-13T15:56:07.341056368Z" level=info msg="StartContainer for \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\""
Feb 13 15:56:07.379505 systemd[1]: Started cri-containerd-358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6.scope - libcontainer container 358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6.
Feb 13 15:56:07.402670 systemd[1]: cri-containerd-358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6.scope: Deactivated successfully.
Feb 13 15:56:07.405361 containerd[1491]: time="2025-02-13T15:56:07.405299877Z" level=info msg="StartContainer for \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\" returns successfully"
Feb 13 15:56:07.427440 containerd[1491]: time="2025-02-13T15:56:07.427370608Z" level=info msg="shim disconnected" id=358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6 namespace=k8s.io
Feb 13 15:56:07.427440 containerd[1491]: time="2025-02-13T15:56:07.427437954Z" level=warning msg="cleaning up after shim disconnected" id=358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6 namespace=k8s.io
Feb 13 15:56:07.427440 containerd[1491]: time="2025-02-13T15:56:07.427447121Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:08.119897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6-rootfs.mount: Deactivated successfully.
Feb 13 15:56:08.139055 kubelet[1803]: E0213 15:56:08.139025 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:08.327253 kubelet[1803]: E0213 15:56:08.327221 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:08.329424 containerd[1491]: time="2025-02-13T15:56:08.329357916Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:56:08.349414 containerd[1491]: time="2025-02-13T15:56:08.349353825Z" level=info msg="CreateContainer within sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\""
Feb 13 15:56:08.350006 containerd[1491]: time="2025-02-13T15:56:08.349957097Z" level=info msg="StartContainer for \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\""
Feb 13 15:56:08.383476 systemd[1]: Started cri-containerd-e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187.scope - libcontainer container e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187.
Feb 13 15:56:08.412486 containerd[1491]: time="2025-02-13T15:56:08.412436678Z" level=info msg="StartContainer for \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\" returns successfully"
Feb 13 15:56:08.501755 kubelet[1803]: I0213 15:56:08.501694 1803 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:56:08.881338 kernel: Initializing XFRM netlink socket
Feb 13 15:56:09.140146 kubelet[1803]: E0213 15:56:09.140075 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:09.332252 kubelet[1803]: E0213 15:56:09.332211 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:09.344853 kubelet[1803]: I0213 15:56:09.344824 1803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lkm9p" podStartSLOduration=9.666740728 podStartE2EDuration="16.3447932s" podCreationTimestamp="2025-02-13 15:55:53 +0000 UTC" firstStartedPulling="2025-02-13 15:55:57.428962273 +0000 UTC m=+4.700481081" lastFinishedPulling="2025-02-13 15:56:04.107014735 +0000 UTC m=+11.378533553" observedRunningTime="2025-02-13 15:56:09.344666161 +0000 UTC m=+16.616184969" watchObservedRunningTime="2025-02-13 15:56:09.3447932 +0000 UTC m=+16.616312008"
Feb 13 15:56:10.141290 kubelet[1803]: E0213 15:56:10.141237 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:10.334489 kubelet[1803]: E0213 15:56:10.334452 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:10.471360 systemd-networkd[1421]: cilium_host: Link UP
Feb 13 15:56:10.471542 systemd-networkd[1421]: cilium_net: Link UP
Feb 13 15:56:10.471738 systemd-networkd[1421]: cilium_net: Gained carrier
Feb 13 15:56:10.471907 systemd-networkd[1421]: cilium_host: Gained carrier
Feb 13 15:56:10.570050 systemd-networkd[1421]: cilium_vxlan: Link UP
Feb 13 15:56:10.570062 systemd-networkd[1421]: cilium_vxlan: Gained carrier
Feb 13 15:56:10.659580 kubelet[1803]: I0213 15:56:10.659523 1803 topology_manager.go:215] "Topology Admit Handler" podUID="d22e4b9c-a3a6-4f04-b911-9411e7f9a8c2" podNamespace="default" podName="nginx-deployment-6d5f899847-lmwwt"
Feb 13 15:56:10.665564 systemd[1]: Created slice kubepods-besteffort-podd22e4b9c_a3a6_4f04_b911_9411e7f9a8c2.slice - libcontainer container kubepods-besteffort-podd22e4b9c_a3a6_4f04_b911_9411e7f9a8c2.slice.
Feb 13 15:56:10.754646 kubelet[1803]: I0213 15:56:10.754517 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fskwh\" (UniqueName: \"kubernetes.io/projected/d22e4b9c-a3a6-4f04-b911-9411e7f9a8c2-kube-api-access-fskwh\") pod \"nginx-deployment-6d5f899847-lmwwt\" (UID: \"d22e4b9c-a3a6-4f04-b911-9411e7f9a8c2\") " pod="default/nginx-deployment-6d5f899847-lmwwt"
Feb 13 15:56:10.783379 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:56:10.913468 systemd-networkd[1421]: cilium_net: Gained IPv6LL
Feb 13 15:56:10.969628 containerd[1491]: time="2025-02-13T15:56:10.969580045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-lmwwt,Uid:d22e4b9c-a3a6-4f04-b911-9411e7f9a8c2,Namespace:default,Attempt:0,}"
Feb 13 15:56:11.141806 kubelet[1803]: E0213 15:56:11.141655 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:11.321430 systemd-networkd[1421]: cilium_host: Gained IPv6LL
Feb 13 15:56:11.335657 kubelet[1803]: E0213 15:56:11.335624 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:11.415199 systemd-networkd[1421]: lxc_health: Link UP
Feb 13 15:56:11.416058 systemd-networkd[1421]: lxc_health: Gained carrier
Feb 13 15:56:11.511060 systemd-networkd[1421]: lxc88bbabc8f4d6: Link UP
Feb 13 15:56:11.521332 kernel: eth0: renamed from tmpb7681
Feb 13 15:56:11.526805 systemd-networkd[1421]: lxc88bbabc8f4d6: Gained carrier
Feb 13 15:56:11.961459 systemd-networkd[1421]: cilium_vxlan: Gained IPv6LL
Feb 13 15:56:12.142205 kubelet[1803]: E0213 15:56:12.142037 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:12.337766 kubelet[1803]: E0213 15:56:12.337629 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:13.129274 kubelet[1803]: E0213 15:56:13.129207 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:13.142468 kubelet[1803]: E0213 15:56:13.142429 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:13.305515 systemd-networkd[1421]: lxc_health: Gained IPv6LL
Feb 13 15:56:13.433920 systemd-networkd[1421]: lxc88bbabc8f4d6: Gained IPv6LL
Feb 13 15:56:13.712372 kubelet[1803]: I0213 15:56:13.712198 1803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:56:13.713184 kubelet[1803]: E0213 15:56:13.713149 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:14.143029 kubelet[1803]: E0213 15:56:14.142992 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:14.339737 kubelet[1803]: E0213 15:56:14.339703 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:56:14.813726 containerd[1491]: time="2025-02-13T15:56:14.813604925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:56:14.814378 containerd[1491]: time="2025-02-13T15:56:14.814265063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:56:14.814378 containerd[1491]: time="2025-02-13T15:56:14.814285002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:56:14.814491 containerd[1491]: time="2025-02-13T15:56:14.814409630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:56:14.839473 systemd[1]: Started cri-containerd-b7681565fee05d40ba78bc11c32c694139fc640d470eab0fcdf893817301a8d3.scope - libcontainer container b7681565fee05d40ba78bc11c32c694139fc640d470eab0fcdf893817301a8d3.
Feb 13 15:56:14.851949 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:56:14.877591 containerd[1491]: time="2025-02-13T15:56:14.877537890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-lmwwt,Uid:d22e4b9c-a3a6-4f04-b911-9411e7f9a8c2,Namespace:default,Attempt:0,} returns sandbox id \"b7681565fee05d40ba78bc11c32c694139fc640d470eab0fcdf893817301a8d3\""
Feb 13 15:56:14.879199 containerd[1491]: time="2025-02-13T15:56:14.879172839Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:56:15.143932 kubelet[1803]: E0213 15:56:15.143871 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:16.144525 kubelet[1803]: E0213 15:56:16.144468 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:17.145236 kubelet[1803]: E0213 15:56:17.145184 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:17.326823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1859674236.mount: Deactivated successfully.
Feb 13 15:56:18.145489 kubelet[1803]: E0213 15:56:18.145402 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:19.491067 kubelet[1803]: E0213 15:56:19.145821 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:20.146856 kubelet[1803]: E0213 15:56:20.146785 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:21.142626 containerd[1491]: time="2025-02-13T15:56:21.142566116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:21.147235 kubelet[1803]: E0213 15:56:21.147205 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:21.190780 containerd[1491]: time="2025-02-13T15:56:21.190720913Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493"
Feb 13 15:56:21.223089 containerd[1491]: time="2025-02-13T15:56:21.223057645Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:21.250285 containerd[1491]: time="2025-02-13T15:56:21.250236128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:21.251233 containerd[1491]: time="2025-02-13T15:56:21.251207758Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 6.372008106s"
Feb 13 15:56:21.251291 containerd[1491]: time="2025-02-13T15:56:21.251238756Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:56:21.252596 containerd[1491]: time="2025-02-13T15:56:21.252569880Z" level=info msg="CreateContainer within sandbox \"b7681565fee05d40ba78bc11c32c694139fc640d470eab0fcdf893817301a8d3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 15:56:21.465557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346178608.mount: Deactivated successfully.
Feb 13 15:56:21.563596 containerd[1491]: time="2025-02-13T15:56:21.563547020Z" level=info msg="CreateContainer within sandbox \"b7681565fee05d40ba78bc11c32c694139fc640d470eab0fcdf893817301a8d3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ecf2913af391f7a8f282511d8b66f54f7063e20428311c2db710d9673118ebe2\""
Feb 13 15:56:21.564075 containerd[1491]: time="2025-02-13T15:56:21.564003168Z" level=info msg="StartContainer for \"ecf2913af391f7a8f282511d8b66f54f7063e20428311c2db710d9673118ebe2\""
Feb 13 15:56:21.592443 systemd[1]: Started cri-containerd-ecf2913af391f7a8f282511d8b66f54f7063e20428311c2db710d9673118ebe2.scope - libcontainer container ecf2913af391f7a8f282511d8b66f54f7063e20428311c2db710d9673118ebe2.
Feb 13 15:56:21.713830 containerd[1491]: time="2025-02-13T15:56:21.713778421Z" level=info msg="StartContainer for \"ecf2913af391f7a8f282511d8b66f54f7063e20428311c2db710d9673118ebe2\" returns successfully"
Feb 13 15:56:22.147620 kubelet[1803]: E0213 15:56:22.147570 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:22.401676 kubelet[1803]: I0213 15:56:22.401559 1803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-lmwwt" podStartSLOduration=6.028881769 podStartE2EDuration="12.401524049s" podCreationTimestamp="2025-02-13 15:56:10 +0000 UTC" firstStartedPulling="2025-02-13 15:56:14.878849619 +0000 UTC m=+22.150368427" lastFinishedPulling="2025-02-13 15:56:21.251491899 +0000 UTC m=+28.523010707" observedRunningTime="2025-02-13 15:56:22.401285496 +0000 UTC m=+29.672804304" watchObservedRunningTime="2025-02-13 15:56:22.401524049 +0000 UTC m=+29.673042858"
Feb 13 15:56:23.148505 kubelet[1803]: E0213 15:56:23.148455 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:24.149382 kubelet[1803]: E0213 15:56:24.149307 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:25.149744 kubelet[1803]: E0213 15:56:25.149686 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:25.435475 update_engine[1480]: I20250213 15:56:25.435263 1480 update_attempter.cc:509] Updating boot flags...
Feb 13 15:56:25.517364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2999)
Feb 13 15:56:25.545354 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2998)
Feb 13 15:56:25.580401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2998)
Feb 13 15:56:26.150528 kubelet[1803]: E0213 15:56:26.150460 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:27.151579 kubelet[1803]: E0213 15:56:27.151519 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:28.151824 kubelet[1803]: E0213 15:56:28.151769 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:28.517995 kubelet[1803]: I0213 15:56:28.517863 1803 topology_manager.go:215] "Topology Admit Handler" podUID="69d11fab-23dc-4bf6-b35c-2d74d937e4c6" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 13 15:56:28.523540 systemd[1]: Created slice kubepods-besteffort-pod69d11fab_23dc_4bf6_b35c_2d74d937e4c6.slice - libcontainer container kubepods-besteffort-pod69d11fab_23dc_4bf6_b35c_2d74d937e4c6.slice.
Feb 13 15:56:28.554474 kubelet[1803]: I0213 15:56:28.554433 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq5h4\" (UniqueName: \"kubernetes.io/projected/69d11fab-23dc-4bf6-b35c-2d74d937e4c6-kube-api-access-rq5h4\") pod \"nfs-server-provisioner-0\" (UID: \"69d11fab-23dc-4bf6-b35c-2d74d937e4c6\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:56:28.554474 kubelet[1803]: I0213 15:56:28.554473 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/69d11fab-23dc-4bf6-b35c-2d74d937e4c6-data\") pod \"nfs-server-provisioner-0\" (UID: \"69d11fab-23dc-4bf6-b35c-2d74d937e4c6\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:56:28.826674 containerd[1491]: time="2025-02-13T15:56:28.826536907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:69d11fab-23dc-4bf6-b35c-2d74d937e4c6,Namespace:default,Attempt:0,}"
Feb 13 15:56:29.127380 systemd-networkd[1421]: lxc88eba2527a14: Link UP
Feb 13 15:56:29.141221 kernel: eth0: renamed from tmp096bd
Feb 13 15:56:29.146939 systemd-networkd[1421]: lxc88eba2527a14: Gained carrier
Feb 13 15:56:29.152755 kubelet[1803]: E0213 15:56:29.152723 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:29.433261 containerd[1491]: time="2025-02-13T15:56:29.433176219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:56:29.433261 containerd[1491]: time="2025-02-13T15:56:29.433228017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:56:29.433261 containerd[1491]: time="2025-02-13T15:56:29.433238647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:56:29.433521 containerd[1491]: time="2025-02-13T15:56:29.433340440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:56:29.454437 systemd[1]: Started cri-containerd-096bd5293f68cc1c53b896c5cc17405450dd066778475787dd764c88386bd1f9.scope - libcontainer container 096bd5293f68cc1c53b896c5cc17405450dd066778475787dd764c88386bd1f9.
Feb 13 15:56:29.465700 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:56:29.487621 containerd[1491]: time="2025-02-13T15:56:29.487582685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:69d11fab-23dc-4bf6-b35c-2d74d937e4c6,Namespace:default,Attempt:0,} returns sandbox id \"096bd5293f68cc1c53b896c5cc17405450dd066778475787dd764c88386bd1f9\""
Feb 13 15:56:29.489084 containerd[1491]: time="2025-02-13T15:56:29.489052927Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 15:56:30.153270 kubelet[1803]: E0213 15:56:30.153226 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:30.970380 systemd-networkd[1421]: lxc88eba2527a14: Gained IPv6LL
Feb 13 15:56:31.154267 kubelet[1803]: E0213 15:56:31.154212 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:32.155308 kubelet[1803]: E0213 15:56:32.155241 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:33.129439 kubelet[1803]: E0213 15:56:33.129392 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:33.155622 kubelet[1803]: E0213 15:56:33.155574 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:33.159149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372741725.mount: Deactivated successfully.
Feb 13 15:56:34.156177 kubelet[1803]: E0213 15:56:34.156124 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:35.157235 kubelet[1803]: E0213 15:56:35.157171 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:36.158331 kubelet[1803]: E0213 15:56:36.158254 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:37.159375 kubelet[1803]: E0213 15:56:37.159244 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:38.160071 kubelet[1803]: E0213 15:56:38.159984 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:38.613258 containerd[1491]: time="2025-02-13T15:56:38.613102024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:38.649211 containerd[1491]: time="2025-02-13T15:56:38.649093464Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Feb 13 15:56:38.671705 containerd[1491]: time="2025-02-13T15:56:38.671622668Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:38.701796 containerd[1491]: time="2025-02-13T15:56:38.701714787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:38.703019 containerd[1491]: time="2025-02-13T15:56:38.702964764Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 9.213882481s"
Feb 13 15:56:38.703089 containerd[1491]: time="2025-02-13T15:56:38.703017432Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 15:56:38.705226 containerd[1491]: time="2025-02-13T15:56:38.705190679Z" level=info msg="CreateContainer within sandbox \"096bd5293f68cc1c53b896c5cc17405450dd066778475787dd764c88386bd1f9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 15:56:38.983736 containerd[1491]: time="2025-02-13T15:56:38.983681423Z" level=info msg="CreateContainer within sandbox \"096bd5293f68cc1c53b896c5cc17405450dd066778475787dd764c88386bd1f9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d2f7f5396391861e95fd03d5f9837ab30def31e9b3c86ed7dd4f1a9d90d247a0\""
Feb 13 15:56:38.984476 containerd[1491]: time="2025-02-13T15:56:38.984436245Z" level=info msg="StartContainer for \"d2f7f5396391861e95fd03d5f9837ab30def31e9b3c86ed7dd4f1a9d90d247a0\""
Feb 13 15:56:39.064533 systemd[1]: Started cri-containerd-d2f7f5396391861e95fd03d5f9837ab30def31e9b3c86ed7dd4f1a9d90d247a0.scope - libcontainer container d2f7f5396391861e95fd03d5f9837ab30def31e9b3c86ed7dd4f1a9d90d247a0.
Feb 13 15:56:39.160688 kubelet[1803]: E0213 15:56:39.160632 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:39.365082 containerd[1491]: time="2025-02-13T15:56:39.364962032Z" level=info msg="StartContainer for \"d2f7f5396391861e95fd03d5f9837ab30def31e9b3c86ed7dd4f1a9d90d247a0\" returns successfully"
Feb 13 15:56:39.400559 kubelet[1803]: I0213 15:56:39.400515 1803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.186008855 podStartE2EDuration="11.400471221s" podCreationTimestamp="2025-02-13 15:56:28 +0000 UTC" firstStartedPulling="2025-02-13 15:56:29.488800559 +0000 UTC m=+36.760319367" lastFinishedPulling="2025-02-13 15:56:38.703262925 +0000 UTC m=+45.974781733" observedRunningTime="2025-02-13 15:56:39.400165025 +0000 UTC m=+46.671683823" watchObservedRunningTime="2025-02-13 15:56:39.400471221 +0000 UTC m=+46.671990099"
Feb 13 15:56:40.161367 kubelet[1803]: E0213 15:56:40.161284 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:41.162406 kubelet[1803]: E0213 15:56:41.162306 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:42.163361 kubelet[1803]: E0213 15:56:42.163289 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:43.164417 kubelet[1803]: E0213 15:56:43.164339 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:44.165474 kubelet[1803]: E0213 15:56:44.165407 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:45.165917 kubelet[1803]: E0213 15:56:45.165826 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:46.166339 kubelet[1803]: E0213 15:56:46.166254 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:47.167357 kubelet[1803]: E0213 15:56:47.167236 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:48.168341 kubelet[1803]: E0213 15:56:48.168252 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:48.734980 kubelet[1803]: I0213 15:56:48.734927 1803 topology_manager.go:215] "Topology Admit Handler" podUID="2e968ed6-9887-4b6e-b25d-87d7feb276b5" podNamespace="default" podName="test-pod-1"
Feb 13 15:56:48.741656 systemd[1]: Created slice kubepods-besteffort-pod2e968ed6_9887_4b6e_b25d_87d7feb276b5.slice - libcontainer container kubepods-besteffort-pod2e968ed6_9887_4b6e_b25d_87d7feb276b5.slice.
Feb 13 15:56:48.778048 kubelet[1803]: I0213 15:56:48.777973 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-14c467c6-684b-46c2-9095-4c3035e258a1\" (UniqueName: \"kubernetes.io/nfs/2e968ed6-9887-4b6e-b25d-87d7feb276b5-pvc-14c467c6-684b-46c2-9095-4c3035e258a1\") pod \"test-pod-1\" (UID: \"2e968ed6-9887-4b6e-b25d-87d7feb276b5\") " pod="default/test-pod-1"
Feb 13 15:56:48.778048 kubelet[1803]: I0213 15:56:48.778039 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9246k\" (UniqueName: \"kubernetes.io/projected/2e968ed6-9887-4b6e-b25d-87d7feb276b5-kube-api-access-9246k\") pod \"test-pod-1\" (UID: \"2e968ed6-9887-4b6e-b25d-87d7feb276b5\") " pod="default/test-pod-1"
Feb 13 15:56:48.907354 kernel: FS-Cache: Loaded
Feb 13 15:56:48.977523 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 15:56:48.977666 kernel: RPC: Registered udp transport module.
Feb 13 15:56:48.977685 kernel: RPC: Registered tcp transport module.
Feb 13 15:56:48.977704 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 15:56:48.978797 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 15:56:49.168998 kubelet[1803]: E0213 15:56:49.168931 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:49.189450 kernel: NFS: Registering the id_resolver key type
Feb 13 15:56:49.189554 kernel: Key type id_resolver registered
Feb 13 15:56:49.189579 kernel: Key type id_legacy registered
Feb 13 15:56:49.217592 nfsidmap[3199]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 15:56:49.222714 nfsidmap[3202]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 15:56:49.344337 containerd[1491]: time="2025-02-13T15:56:49.344278641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2e968ed6-9887-4b6e-b25d-87d7feb276b5,Namespace:default,Attempt:0,}"
Feb 13 15:56:49.373248 systemd-networkd[1421]: lxcf5850b69fac6: Link UP
Feb 13 15:56:49.381449 kernel: eth0: renamed from tmp3310e
Feb 13 15:56:49.393365 systemd-networkd[1421]: lxcf5850b69fac6: Gained carrier
Feb 13 15:56:49.589414 containerd[1491]: time="2025-02-13T15:56:49.588891748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:56:49.589414 containerd[1491]: time="2025-02-13T15:56:49.588996885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:56:49.589414 containerd[1491]: time="2025-02-13T15:56:49.589013676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:56:49.589414 containerd[1491]: time="2025-02-13T15:56:49.589156405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:56:49.611515 systemd[1]: Started cri-containerd-3310e5f93d709389f2cdc229e9795ce91af69b7e7f3d31386e2bf3d368ac5c7a.scope - libcontainer container 3310e5f93d709389f2cdc229e9795ce91af69b7e7f3d31386e2bf3d368ac5c7a.
Feb 13 15:56:49.625517 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:56:49.650382 containerd[1491]: time="2025-02-13T15:56:49.650289076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2e968ed6-9887-4b6e-b25d-87d7feb276b5,Namespace:default,Attempt:0,} returns sandbox id \"3310e5f93d709389f2cdc229e9795ce91af69b7e7f3d31386e2bf3d368ac5c7a\""
Feb 13 15:56:49.652342 containerd[1491]: time="2025-02-13T15:56:49.652191984Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:56:50.096138 containerd[1491]: time="2025-02-13T15:56:50.096055704Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:56:50.100156 containerd[1491]: time="2025-02-13T15:56:50.100039281Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 15:56:50.103094 containerd[1491]: time="2025-02-13T15:56:50.103033971Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 450.802473ms"
Feb 13 15:56:50.103094 containerd[1491]: time="2025-02-13T15:56:50.103091018Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:56:50.108078 containerd[1491]: time="2025-02-13T15:56:50.105641693Z" level=info msg="CreateContainer within sandbox \"3310e5f93d709389f2cdc229e9795ce91af69b7e7f3d31386e2bf3d368ac5c7a\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 15:56:50.122034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64518282.mount: Deactivated successfully.
Feb 13 15:56:50.126949 containerd[1491]: time="2025-02-13T15:56:50.126791439Z" level=info msg="CreateContainer within sandbox \"3310e5f93d709389f2cdc229e9795ce91af69b7e7f3d31386e2bf3d368ac5c7a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c330c39cba4c154779e3e12778a2af7c031eaeea1cfd86fedc0e74ca8d07aa3a\""
Feb 13 15:56:50.127478 containerd[1491]: time="2025-02-13T15:56:50.127453513Z" level=info msg="StartContainer for \"c330c39cba4c154779e3e12778a2af7c031eaeea1cfd86fedc0e74ca8d07aa3a\""
Feb 13 15:56:50.160503 systemd[1]: Started cri-containerd-c330c39cba4c154779e3e12778a2af7c031eaeea1cfd86fedc0e74ca8d07aa3a.scope - libcontainer container c330c39cba4c154779e3e12778a2af7c031eaeea1cfd86fedc0e74ca8d07aa3a.
Feb 13 15:56:50.169390 kubelet[1803]: E0213 15:56:50.169342 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:50.187749 containerd[1491]: time="2025-02-13T15:56:50.187701413Z" level=info msg="StartContainer for \"c330c39cba4c154779e3e12778a2af7c031eaeea1cfd86fedc0e74ca8d07aa3a\" returns successfully"
Feb 13 15:56:50.414387 kubelet[1803]: I0213 15:56:50.414293 1803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=21.962670931 podStartE2EDuration="22.41424772s" podCreationTimestamp="2025-02-13 15:56:28 +0000 UTC" firstStartedPulling="2025-02-13 15:56:49.651846103 +0000 UTC m=+56.923364911" lastFinishedPulling="2025-02-13 15:56:50.103422892 +0000 UTC m=+57.374941700" observedRunningTime="2025-02-13 15:56:50.414032174 +0000 UTC m=+57.685550982" watchObservedRunningTime="2025-02-13 15:56:50.41424772 +0000 UTC m=+57.685766528"
Feb 13 15:56:51.170532 kubelet[1803]: E0213 15:56:51.170460 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:51.193620 systemd-networkd[1421]: lxcf5850b69fac6: Gained IPv6LL
Feb 13 15:56:52.170942 kubelet[1803]: E0213 15:56:52.170876 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:53.130076 kubelet[1803]: E0213 15:56:53.129999 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:53.171805 kubelet[1803]: E0213 15:56:53.171741 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:54.172938 kubelet[1803]: E0213 15:56:54.172864 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:55.173306 kubelet[1803]: E0213 15:56:55.173213 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:56.174576 kubelet[1803]: E0213 15:56:56.174140 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:56.620455 containerd[1491]: time="2025-02-13T15:56:56.620306873Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:56:56.628101 containerd[1491]: time="2025-02-13T15:56:56.628055057Z" level=info msg="StopContainer for \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\" with timeout 2 (s)"
Feb 13 15:56:56.628304 containerd[1491]: time="2025-02-13T15:56:56.628284348Z" level=info msg="Stop container \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\" with signal terminated"
Feb 13 15:56:56.635002 systemd-networkd[1421]: lxc_health: Link DOWN
Feb 13 15:56:56.635011 systemd-networkd[1421]: lxc_health: Lost carrier
Feb 13 15:56:56.662453 systemd[1]: cri-containerd-e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187.scope: Deactivated successfully.
Feb 13 15:56:56.662848 systemd[1]: cri-containerd-e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187.scope: Consumed 6.820s CPU time.
Feb 13 15:56:56.684913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187-rootfs.mount: Deactivated successfully.
Feb 13 15:56:56.716469 containerd[1491]: time="2025-02-13T15:56:56.716384208Z" level=info msg="shim disconnected" id=e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187 namespace=k8s.io
Feb 13 15:56:56.716469 containerd[1491]: time="2025-02-13T15:56:56.716463468Z" level=warning msg="cleaning up after shim disconnected" id=e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187 namespace=k8s.io
Feb 13 15:56:56.716469 containerd[1491]: time="2025-02-13T15:56:56.716475160Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:56.735361 containerd[1491]: time="2025-02-13T15:56:56.735286322Z" level=info msg="StopContainer for \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\" returns successfully"
Feb 13 15:56:56.736090 containerd[1491]: time="2025-02-13T15:56:56.736053524Z" level=info msg="StopPodSandbox for \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\""
Feb 13 15:56:56.736158 containerd[1491]: time="2025-02-13T15:56:56.736109770Z" level=info msg="Container to stop \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:56:56.736187 containerd[1491]: time="2025-02-13T15:56:56.736156808Z" level=info msg="Container to stop \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:56:56.736187 containerd[1491]: time="2025-02-13T15:56:56.736168690Z" level=info msg="Container to stop \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:56:56.736187 containerd[1491]: time="2025-02-13T15:56:56.736180723Z" level=info msg="Container to stop \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:56:56.736278 containerd[1491]: time="2025-02-13T15:56:56.736191954Z" level=info msg="Container to stop \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:56:56.738154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a-shm.mount: Deactivated successfully.
Feb 13 15:56:56.742455 systemd[1]: cri-containerd-fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a.scope: Deactivated successfully.
Feb 13 15:56:56.762789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a-rootfs.mount: Deactivated successfully.
Feb 13 15:56:56.767097 containerd[1491]: time="2025-02-13T15:56:56.767029256Z" level=info msg="shim disconnected" id=fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a namespace=k8s.io
Feb 13 15:56:56.767429 containerd[1491]: time="2025-02-13T15:56:56.767133011Z" level=warning msg="cleaning up after shim disconnected" id=fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a namespace=k8s.io
Feb 13 15:56:56.767429 containerd[1491]: time="2025-02-13T15:56:56.767144613Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:56.782802 containerd[1491]: time="2025-02-13T15:56:56.782743343Z" level=info msg="TearDown network for sandbox \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" successfully"
Feb 13 15:56:56.782868 containerd[1491]: time="2025-02-13T15:56:56.782797915Z" level=info msg="StopPodSandbox for \"fc13ba591ff67836066f6e88c465f4c463d81ae3d2b0ddeb8bfa9944a574b42a\" returns successfully"
Feb 13 15:56:56.829831 kubelet[1803]: I0213 15:56:56.829779 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-host-proc-sys-kernel\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.829831 kubelet[1803]: I0213 15:56:56.829836 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6r29\" (UniqueName: \"kubernetes.io/projected/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-kube-api-access-v6r29\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830006 kubelet[1803]: I0213 15:56:56.829859 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-hubble-tls\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830006 kubelet[1803]: I0213 15:56:56.829878 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-xtables-lock\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830006 kubelet[1803]: I0213 15:56:56.829896 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-host-proc-sys-net\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830006 kubelet[1803]: I0213 15:56:56.829893 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.830006 kubelet[1803]: I0213 15:56:56.829913 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-cgroup\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830127 kubelet[1803]: I0213 15:56:56.829957 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.830127 kubelet[1803]: I0213 15:56:56.829972 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cni-path\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830127 kubelet[1803]: I0213 15:56:56.829992 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-etc-cni-netd\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830127 kubelet[1803]: I0213 15:56:56.830017 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-config-path\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830127 kubelet[1803]: I0213 15:56:56.830034 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-hostproc\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830127 kubelet[1803]: I0213 15:56:56.830050 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-bpf-maps\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830275 kubelet[1803]: I0213 15:56:56.830064 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-run\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830275 kubelet[1803]: I0213 15:56:56.830083 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-clustermesh-secrets\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830275 kubelet[1803]: I0213 15:56:56.830100 1803 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-lib-modules\") pod \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\" (UID: \"8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67\") "
Feb 13 15:56:56.830275 kubelet[1803]: I0213 15:56:56.830140 1803 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-cgroup\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.830275 kubelet[1803]: I0213 15:56:56.830153 1803 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-host-proc-sys-kernel\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.830275 kubelet[1803]: I0213 15:56:56.830169 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.830434 kubelet[1803]: I0213 15:56:56.830186 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cni-path" (OuterVolumeSpecName: "cni-path") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.830434 kubelet[1803]: I0213 15:56:56.830201 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.830932 kubelet[1803]: I0213 15:56:56.830574 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.830932 kubelet[1803]: I0213 15:56:56.830640 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.830932 kubelet[1803]: I0213 15:56:56.830659 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.830932 kubelet[1803]: I0213 15:56:56.830674 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-hostproc" (OuterVolumeSpecName: "hostproc") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.830932 kubelet[1803]: I0213 15:56:56.830692 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:56:56.833762 kubelet[1803]: I0213 15:56:56.833581 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:56:56.833762 kubelet[1803]: I0213 15:56:56.833725 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-kube-api-access-v6r29" (OuterVolumeSpecName: "kube-api-access-v6r29") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "kube-api-access-v6r29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:56:56.834619 systemd[1]: var-lib-kubelet-pods-8ba71f8d\x2dcc7b\x2d49f0\x2dbb1d\x2d0c74f526dd67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv6r29.mount: Deactivated successfully.
Feb 13 15:56:56.834778 systemd[1]: var-lib-kubelet-pods-8ba71f8d\x2dcc7b\x2d49f0\x2dbb1d\x2d0c74f526dd67-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:56:56.834896 kubelet[1803]: I0213 15:56:56.834876 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:56:56.835102 kubelet[1803]: I0213 15:56:56.835059 1803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" (UID: "8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:56:56.930740 kubelet[1803]: I0213 15:56:56.930666 1803 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cni-path\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.930740 kubelet[1803]: I0213 15:56:56.930721 1803 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-etc-cni-netd\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.930740 kubelet[1803]: I0213 15:56:56.930732 1803 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-xtables-lock\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.930740 kubelet[1803]: I0213 15:56:56.930743 1803 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-host-proc-sys-net\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.930740 kubelet[1803]: I0213 15:56:56.930753 1803 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-hostproc\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.930740 kubelet[1803]: I0213 15:56:56.930763 1803 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-config-path\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.931019 kubelet[1803]: I0213 15:56:56.930772 1803 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-clustermesh-secrets\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.931019 kubelet[1803]: I0213 15:56:56.930781 1803 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-lib-modules\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.931019 kubelet[1803]: I0213 15:56:56.930790 1803 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-bpf-maps\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.931019 kubelet[1803]: I0213 15:56:56.930798 1803 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-cilium-run\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.931019 kubelet[1803]: I0213 15:56:56.930808 1803 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-hubble-tls\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:56.931019 kubelet[1803]: I0213 15:56:56.930817 1803 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v6r29\" (UniqueName: \"kubernetes.io/projected/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67-kube-api-access-v6r29\") on node \"10.0.0.115\" DevicePath \"\""
Feb 13 15:56:57.175215 kubelet[1803]: E0213 15:56:57.175139 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:56:57.298400 systemd[1]: Removed slice kubepods-burstable-pod8ba71f8d_cc7b_49f0_bb1d_0c74f526dd67.slice - libcontainer container kubepods-burstable-pod8ba71f8d_cc7b_49f0_bb1d_0c74f526dd67.slice.
Feb 13 15:56:57.298517 systemd[1]: kubepods-burstable-pod8ba71f8d_cc7b_49f0_bb1d_0c74f526dd67.slice: Consumed 6.918s CPU time.
Feb 13 15:56:57.420063 kubelet[1803]: I0213 15:56:57.420009 1803 scope.go:117] "RemoveContainer" containerID="e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187"
Feb 13 15:56:57.421852 containerd[1491]: time="2025-02-13T15:56:57.421812932Z" level=info msg="RemoveContainer for \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\""
Feb 13 15:56:57.426908 containerd[1491]: time="2025-02-13T15:56:57.426845394Z" level=info msg="RemoveContainer for \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\" returns successfully"
Feb 13 15:56:57.427232 kubelet[1803]: I0213 15:56:57.427123 1803 scope.go:117] "RemoveContainer" containerID="358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6"
Feb 13 15:56:57.428453 containerd[1491]: time="2025-02-13T15:56:57.428410543Z" level=info msg="RemoveContainer for \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\""
Feb 13 15:56:57.432807 containerd[1491]: time="2025-02-13T15:56:57.432742268Z" level=info msg="RemoveContainer for \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\" returns successfully"
Feb 13 15:56:57.433280 kubelet[1803]: I0213 15:56:57.433234 1803 scope.go:117] "RemoveContainer" containerID="1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01"
Feb 13 15:56:57.435244 containerd[1491]: time="2025-02-13T15:56:57.435129813Z" level=info msg="RemoveContainer for \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\""
Feb 13 15:56:57.441002 containerd[1491]: time="2025-02-13T15:56:57.440919256Z" level=info msg="RemoveContainer for \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\" returns successfully"
Feb 13 15:56:57.441542 kubelet[1803]: I0213 15:56:57.441306 1803 scope.go:117] "RemoveContainer" containerID="27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7"
Feb 13 15:56:57.443267 containerd[1491]: time="2025-02-13T15:56:57.443190331Z" level=info msg="RemoveContainer for \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\""
Feb 13 15:56:57.450137 containerd[1491]: time="2025-02-13T15:56:57.450064482Z" level=info msg="RemoveContainer for \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\" returns successfully"
Feb 13 15:56:57.450538 kubelet[1803]: I0213 15:56:57.450490 1803 scope.go:117] "RemoveContainer" containerID="0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff"
Feb 13 15:56:57.453286 containerd[1491]: time="2025-02-13T15:56:57.453247300Z" level=info msg="RemoveContainer for \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\""
Feb 13 15:56:57.459078 containerd[1491]: time="2025-02-13T15:56:57.459011105Z" level=info msg="RemoveContainer for \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\" returns successfully"
Feb 13 15:56:57.459467 kubelet[1803]: I0213 15:56:57.459429 1803 scope.go:117] "RemoveContainer" containerID="e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187"
Feb 13 15:56:57.459797 containerd[1491]: time="2025-02-13T15:56:57.459749823Z" level=error msg="ContainerStatus for \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\": not found"
Feb 13 15:56:57.459938 kubelet[1803]: E0213 15:56:57.459917 1803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\": not found" containerID="e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187"
Feb 13 15:56:57.460060 kubelet[1803]: I0213 15:56:57.460039 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187"} err="failed to get container status \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\": rpc error: code = NotFound desc = an error occurred when try to find container \"e099f794541b5e1dede0030c6aa2a281ae4bb6ada664c68af1242c0a0deef187\": not found"
Feb 13 15:56:57.460099 kubelet[1803]: I0213 15:56:57.460061 1803 scope.go:117] "RemoveContainer" containerID="358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6"
Feb 13 15:56:57.460303 containerd[1491]: time="2025-02-13T15:56:57.460266383Z" level=error msg="ContainerStatus for \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\": not found"
Feb 13 15:56:57.460483 kubelet[1803]: E0213 15:56:57.460467 1803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\": not found" containerID="358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6"
Feb 13 15:56:57.460541 kubelet[1803]: I0213 15:56:57.460500 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6"} err="failed to get container status \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\": rpc error: code = NotFound desc = an error occurred when try to find container \"358e736fa4869dadf863212162489bb9b3ae93ec1a86fcbbbf77e84f4efaffa6\": not found"
Feb 13 15:56:57.460541 kubelet[1803]: I0213 15:56:57.460519 1803 scope.go:117] "RemoveContainer" containerID="1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01"
Feb 13 15:56:57.460690 containerd[1491]: time="2025-02-13T15:56:57.460657257Z" level=error msg="ContainerStatus for \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\": not found"
Feb 13 15:56:57.460834 kubelet[1803]: E0213 15:56:57.460788 1803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\": not found" containerID="1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01"
Feb 13 15:56:57.460834 kubelet[1803]: I0213 15:56:57.460820 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01"} err="failed to get container status \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fce21f417c6ae079cc69259ed012b53fecb9276e5cb5e20d817c7f66c7d7a01\": not found"
Feb 13 15:56:57.460834 kubelet[1803]: I0213 15:56:57.460835 1803 scope.go:117] "RemoveContainer" containerID="27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7"
Feb 13 15:56:57.461089 containerd[1491]: time="2025-02-13T15:56:57.460998618Z" level=error msg="ContainerStatus for \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\": not found"
Feb 13 15:56:57.461169 kubelet[1803]: E0213 15:56:57.461143 1803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\": not found" containerID="27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7"
Feb 13 15:56:57.461169 kubelet[1803]: I0213 15:56:57.461168 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7"} err="failed to get container status \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"27ea4897de07bbc4cf86577fe794192d391e28d59d676f02160bf7b044c9e1f7\": not found"
Feb 13 15:56:57.461278 kubelet[1803]: I0213 15:56:57.461177 1803 scope.go:117] "RemoveContainer" containerID="0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff"
Feb 13 15:56:57.461394 containerd[1491]: time="2025-02-13T15:56:57.461355108Z" level=error msg="ContainerStatus for \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\": not found"
Feb 13 15:56:57.461897 kubelet[1803]: E0213 15:56:57.461846 1803 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\": not found" containerID="0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff"
Feb 13 15:56:57.461963 kubelet[1803]: I0213 15:56:57.461916 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff"} err="failed to get
container status \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e5bec68ac86d3c49f5c914e70f06089e3eb0fc0e829086b339f14238cc3b9ff\": not found" Feb 13 15:56:57.605273 systemd[1]: var-lib-kubelet-pods-8ba71f8d\x2dcc7b\x2d49f0\x2dbb1d\x2d0c74f526dd67-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:56:58.175971 kubelet[1803]: E0213 15:56:58.175866 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:56:58.364246 kubelet[1803]: E0213 15:56:58.364191 1803 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:56:59.177226 kubelet[1803]: E0213 15:56:59.177121 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:56:59.203266 kubelet[1803]: I0213 15:56:59.203203 1803 topology_manager.go:215] "Topology Admit Handler" podUID="d273041d-993e-43e3-adfc-650dc6ef9eac" podNamespace="kube-system" podName="cilium-operator-5cc964979-gsdsk" Feb 13 15:56:59.203266 kubelet[1803]: E0213 15:56:59.203263 1803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" containerName="mount-bpf-fs" Feb 13 15:56:59.203266 kubelet[1803]: E0213 15:56:59.203279 1803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" containerName="cilium-agent" Feb 13 15:56:59.203557 kubelet[1803]: E0213 15:56:59.203286 1803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" containerName="mount-cgroup" Feb 13 15:56:59.203557 kubelet[1803]: E0213 15:56:59.203294 1803 cpu_manager.go:395] "RemoveStaleState: removing 
container" podUID="8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" containerName="apply-sysctl-overwrites" Feb 13 15:56:59.203557 kubelet[1803]: E0213 15:56:59.203301 1803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" containerName="clean-cilium-state" Feb 13 15:56:59.203557 kubelet[1803]: I0213 15:56:59.203360 1803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" containerName="cilium-agent" Feb 13 15:56:59.209579 systemd[1]: Created slice kubepods-besteffort-podd273041d_993e_43e3_adfc_650dc6ef9eac.slice - libcontainer container kubepods-besteffort-podd273041d_993e_43e3_adfc_650dc6ef9eac.slice. Feb 13 15:56:59.210191 kubelet[1803]: I0213 15:56:59.210040 1803 topology_manager.go:215] "Topology Admit Handler" podUID="d1f40a97-60e5-4d14-92bf-3f280a0d54f4" podNamespace="kube-system" podName="cilium-n2nwh" Feb 13 15:56:59.216837 systemd[1]: Created slice kubepods-burstable-podd1f40a97_60e5_4d14_92bf_3f280a0d54f4.slice - libcontainer container kubepods-burstable-podd1f40a97_60e5_4d14_92bf_3f280a0d54f4.slice. 
Feb 13 15:56:59.244033 kubelet[1803]: I0213 15:56:59.243971 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d469n\" (UniqueName: \"kubernetes.io/projected/d273041d-993e-43e3-adfc-650dc6ef9eac-kube-api-access-d469n\") pod \"cilium-operator-5cc964979-gsdsk\" (UID: \"d273041d-993e-43e3-adfc-650dc6ef9eac\") " pod="kube-system/cilium-operator-5cc964979-gsdsk" Feb 13 15:56:59.244033 kubelet[1803]: I0213 15:56:59.244022 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-host-proc-sys-net\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244033 kubelet[1803]: I0213 15:56:59.244042 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-cni-path\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244290 kubelet[1803]: I0213 15:56:59.244060 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-cilium-run\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244290 kubelet[1803]: I0213 15:56:59.244138 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-bpf-maps\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244290 kubelet[1803]: I0213 15:56:59.244185 1803 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-hostproc\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244290 kubelet[1803]: I0213 15:56:59.244205 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-xtables-lock\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244290 kubelet[1803]: I0213 15:56:59.244238 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-clustermesh-secrets\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244290 kubelet[1803]: I0213 15:56:59.244256 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-etc-cni-netd\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244633 kubelet[1803]: I0213 15:56:59.244275 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-cilium-config-path\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244633 kubelet[1803]: I0213 15:56:59.244292 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-cilium-ipsec-secrets\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244633 kubelet[1803]: I0213 15:56:59.244340 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b5wb\" (UniqueName: \"kubernetes.io/projected/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-kube-api-access-7b5wb\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244633 kubelet[1803]: I0213 15:56:59.244362 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d273041d-993e-43e3-adfc-650dc6ef9eac-cilium-config-path\") pod \"cilium-operator-5cc964979-gsdsk\" (UID: \"d273041d-993e-43e3-adfc-650dc6ef9eac\") " pod="kube-system/cilium-operator-5cc964979-gsdsk" Feb 13 15:56:59.244633 kubelet[1803]: I0213 15:56:59.244406 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-host-proc-sys-kernel\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244805 kubelet[1803]: I0213 15:56:59.244423 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-hubble-tls\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244805 kubelet[1803]: I0213 15:56:59.244441 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-cilium-cgroup\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.244805 kubelet[1803]: I0213 15:56:59.244460 1803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1f40a97-60e5-4d14-92bf-3f280a0d54f4-lib-modules\") pod \"cilium-n2nwh\" (UID: \"d1f40a97-60e5-4d14-92bf-3f280a0d54f4\") " pod="kube-system/cilium-n2nwh" Feb 13 15:56:59.291470 kubelet[1803]: I0213 15:56:59.291427 1803 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67" path="/var/lib/kubelet/pods/8ba71f8d-cc7b-49f0-bb1d-0c74f526dd67/volumes" Feb 13 15:56:59.513900 kubelet[1803]: E0213 15:56:59.513750 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:56:59.514711 containerd[1491]: time="2025-02-13T15:56:59.514656856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gsdsk,Uid:d273041d-993e-43e3-adfc-650dc6ef9eac,Namespace:kube-system,Attempt:0,}" Feb 13 15:56:59.529688 kubelet[1803]: E0213 15:56:59.529649 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:56:59.530284 containerd[1491]: time="2025-02-13T15:56:59.530235219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n2nwh,Uid:d1f40a97-60e5-4d14-92bf-3f280a0d54f4,Namespace:kube-system,Attempt:0,}" Feb 13 15:56:59.538385 containerd[1491]: time="2025-02-13T15:56:59.538011271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:56:59.538385 containerd[1491]: time="2025-02-13T15:56:59.538081343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:56:59.538385 containerd[1491]: time="2025-02-13T15:56:59.538097884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:56:59.538385 containerd[1491]: time="2025-02-13T15:56:59.538204915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:56:59.557368 containerd[1491]: time="2025-02-13T15:56:59.557242204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:56:59.557555 containerd[1491]: time="2025-02-13T15:56:59.557297457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:56:59.557555 containerd[1491]: time="2025-02-13T15:56:59.557395621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:56:59.557555 containerd[1491]: time="2025-02-13T15:56:59.557466895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:56:59.559501 systemd[1]: Started cri-containerd-f167b8e99c3f122b11d3fbe7551761b6cc57aa59c3b12866eb14d88585234e51.scope - libcontainer container f167b8e99c3f122b11d3fbe7551761b6cc57aa59c3b12866eb14d88585234e51. Feb 13 15:56:59.579583 systemd[1]: Started cri-containerd-2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f.scope - libcontainer container 2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f. 
Feb 13 15:56:59.602295 containerd[1491]: time="2025-02-13T15:56:59.602247575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-gsdsk,Uid:d273041d-993e-43e3-adfc-650dc6ef9eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"f167b8e99c3f122b11d3fbe7551761b6cc57aa59c3b12866eb14d88585234e51\"" Feb 13 15:56:59.603128 kubelet[1803]: E0213 15:56:59.603093 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:56:59.604206 containerd[1491]: time="2025-02-13T15:56:59.604151650Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:56:59.608879 containerd[1491]: time="2025-02-13T15:56:59.608827610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n2nwh,Uid:d1f40a97-60e5-4d14-92bf-3f280a0d54f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\"" Feb 13 15:56:59.609651 kubelet[1803]: E0213 15:56:59.609622 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:56:59.611664 containerd[1491]: time="2025-02-13T15:56:59.611595458Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:56:59.627300 containerd[1491]: time="2025-02-13T15:56:59.627226019Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b48c5c410174ab233bbcfe8c39af87c13d5a242ff43049ec72d31ce79e4290c\"" Feb 13 15:56:59.627754 containerd[1491]: 
time="2025-02-13T15:56:59.627711921Z" level=info msg="StartContainer for \"7b48c5c410174ab233bbcfe8c39af87c13d5a242ff43049ec72d31ce79e4290c\"" Feb 13 15:56:59.658572 systemd[1]: Started cri-containerd-7b48c5c410174ab233bbcfe8c39af87c13d5a242ff43049ec72d31ce79e4290c.scope - libcontainer container 7b48c5c410174ab233bbcfe8c39af87c13d5a242ff43049ec72d31ce79e4290c. Feb 13 15:56:59.684547 containerd[1491]: time="2025-02-13T15:56:59.684492435Z" level=info msg="StartContainer for \"7b48c5c410174ab233bbcfe8c39af87c13d5a242ff43049ec72d31ce79e4290c\" returns successfully" Feb 13 15:56:59.695540 systemd[1]: cri-containerd-7b48c5c410174ab233bbcfe8c39af87c13d5a242ff43049ec72d31ce79e4290c.scope: Deactivated successfully. Feb 13 15:56:59.732033 containerd[1491]: time="2025-02-13T15:56:59.731947225Z" level=info msg="shim disconnected" id=7b48c5c410174ab233bbcfe8c39af87c13d5a242ff43049ec72d31ce79e4290c namespace=k8s.io Feb 13 15:56:59.732033 containerd[1491]: time="2025-02-13T15:56:59.732028518Z" level=warning msg="cleaning up after shim disconnected" id=7b48c5c410174ab233bbcfe8c39af87c13d5a242ff43049ec72d31ce79e4290c namespace=k8s.io Feb 13 15:56:59.732033 containerd[1491]: time="2025-02-13T15:56:59.732042814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:57:00.178172 kubelet[1803]: E0213 15:57:00.178072 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:57:00.430416 kubelet[1803]: E0213 15:57:00.430292 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:57:00.432045 containerd[1491]: time="2025-02-13T15:57:00.432007180Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:57:00.460173 containerd[1491]: 
time="2025-02-13T15:57:00.460085651Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"841ebbbeb1c125826488ec3e9d185a33896334c834d722ec7e4f861732ee9d1b\"" Feb 13 15:57:00.460757 containerd[1491]: time="2025-02-13T15:57:00.460715834Z" level=info msg="StartContainer for \"841ebbbeb1c125826488ec3e9d185a33896334c834d722ec7e4f861732ee9d1b\"" Feb 13 15:57:00.495574 systemd[1]: Started cri-containerd-841ebbbeb1c125826488ec3e9d185a33896334c834d722ec7e4f861732ee9d1b.scope - libcontainer container 841ebbbeb1c125826488ec3e9d185a33896334c834d722ec7e4f861732ee9d1b. Feb 13 15:57:00.525947 containerd[1491]: time="2025-02-13T15:57:00.525891752Z" level=info msg="StartContainer for \"841ebbbeb1c125826488ec3e9d185a33896334c834d722ec7e4f861732ee9d1b\" returns successfully" Feb 13 15:57:00.533616 systemd[1]: cri-containerd-841ebbbeb1c125826488ec3e9d185a33896334c834d722ec7e4f861732ee9d1b.scope: Deactivated successfully. Feb 13 15:57:00.559478 containerd[1491]: time="2025-02-13T15:57:00.559404668Z" level=info msg="shim disconnected" id=841ebbbeb1c125826488ec3e9d185a33896334c834d722ec7e4f861732ee9d1b namespace=k8s.io Feb 13 15:57:00.559478 containerd[1491]: time="2025-02-13T15:57:00.559469920Z" level=warning msg="cleaning up after shim disconnected" id=841ebbbeb1c125826488ec3e9d185a33896334c834d722ec7e4f861732ee9d1b namespace=k8s.io Feb 13 15:57:00.559478 containerd[1491]: time="2025-02-13T15:57:00.559480299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:57:01.178418 kubelet[1803]: E0213 15:57:01.178367 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:57:01.350837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-841ebbbeb1c125826488ec3e9d185a33896334c834d722ec7e4f861732ee9d1b-rootfs.mount: Deactivated successfully. 
Feb 13 15:57:01.435487 kubelet[1803]: E0213 15:57:01.435354 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:57:01.437573 containerd[1491]: time="2025-02-13T15:57:01.437535781Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:57:01.462715 containerd[1491]: time="2025-02-13T15:57:01.462534035Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b39fd45e72616fe2cf4ee299474506048ed83ebcc29a0cdb8bb86fdb6ba42403\"" Feb 13 15:57:01.463922 containerd[1491]: time="2025-02-13T15:57:01.463709051Z" level=info msg="StartContainer for \"b39fd45e72616fe2cf4ee299474506048ed83ebcc29a0cdb8bb86fdb6ba42403\"" Feb 13 15:57:01.503897 systemd[1]: Started cri-containerd-b39fd45e72616fe2cf4ee299474506048ed83ebcc29a0cdb8bb86fdb6ba42403.scope - libcontainer container b39fd45e72616fe2cf4ee299474506048ed83ebcc29a0cdb8bb86fdb6ba42403. Feb 13 15:57:01.548280 systemd[1]: cri-containerd-b39fd45e72616fe2cf4ee299474506048ed83ebcc29a0cdb8bb86fdb6ba42403.scope: Deactivated successfully. 
Feb 13 15:57:01.552218 containerd[1491]: time="2025-02-13T15:57:01.552172893Z" level=info msg="StartContainer for \"b39fd45e72616fe2cf4ee299474506048ed83ebcc29a0cdb8bb86fdb6ba42403\" returns successfully" Feb 13 15:57:01.869581 containerd[1491]: time="2025-02-13T15:57:01.869400888Z" level=info msg="shim disconnected" id=b39fd45e72616fe2cf4ee299474506048ed83ebcc29a0cdb8bb86fdb6ba42403 namespace=k8s.io Feb 13 15:57:01.869581 containerd[1491]: time="2025-02-13T15:57:01.869477742Z" level=warning msg="cleaning up after shim disconnected" id=b39fd45e72616fe2cf4ee299474506048ed83ebcc29a0cdb8bb86fdb6ba42403 namespace=k8s.io Feb 13 15:57:01.869581 containerd[1491]: time="2025-02-13T15:57:01.869489554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:57:01.898291 containerd[1491]: time="2025-02-13T15:57:01.898121640Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:01.899421 containerd[1491]: time="2025-02-13T15:57:01.899275237Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:57:01.901322 containerd[1491]: time="2025-02-13T15:57:01.901260003Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:57:01.903363 containerd[1491]: time="2025-02-13T15:57:01.902798331Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.298367787s" Feb 13 15:57:01.903363 containerd[1491]: time="2025-02-13T15:57:01.902852051Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:57:01.905198 containerd[1491]: time="2025-02-13T15:57:01.905071278Z" level=info msg="CreateContainer within sandbox \"f167b8e99c3f122b11d3fbe7551761b6cc57aa59c3b12866eb14d88585234e51\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:57:01.936988 containerd[1491]: time="2025-02-13T15:57:01.936910596Z" level=info msg="CreateContainer within sandbox \"f167b8e99c3f122b11d3fbe7551761b6cc57aa59c3b12866eb14d88585234e51\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6bd0df9208171cf90ed723be1001c9c9117f2d1dbfa1f3e06b6734e43bff000d\"" Feb 13 15:57:01.937601 containerd[1491]: time="2025-02-13T15:57:01.937568262Z" level=info msg="StartContainer for \"6bd0df9208171cf90ed723be1001c9c9117f2d1dbfa1f3e06b6734e43bff000d\"" Feb 13 15:57:01.977692 systemd[1]: Started cri-containerd-6bd0df9208171cf90ed723be1001c9c9117f2d1dbfa1f3e06b6734e43bff000d.scope - libcontainer container 6bd0df9208171cf90ed723be1001c9c9117f2d1dbfa1f3e06b6734e43bff000d. 
Feb 13 15:57:02.012346 containerd[1491]: time="2025-02-13T15:57:02.012233402Z" level=info msg="StartContainer for \"6bd0df9208171cf90ed723be1001c9c9117f2d1dbfa1f3e06b6734e43bff000d\" returns successfully" Feb 13 15:57:02.179335 kubelet[1803]: E0213 15:57:02.179249 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:57:02.352180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b39fd45e72616fe2cf4ee299474506048ed83ebcc29a0cdb8bb86fdb6ba42403-rootfs.mount: Deactivated successfully. Feb 13 15:57:02.439240 kubelet[1803]: E0213 15:57:02.438870 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:57:02.440846 kubelet[1803]: E0213 15:57:02.440816 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:57:02.442934 containerd[1491]: time="2025-02-13T15:57:02.442872821Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:57:02.451125 kubelet[1803]: I0213 15:57:02.450947 1803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-gsdsk" podStartSLOduration=1.15150769 podStartE2EDuration="3.450875024s" podCreationTimestamp="2025-02-13 15:56:59 +0000 UTC" firstStartedPulling="2025-02-13 15:56:59.603782417 +0000 UTC m=+66.875301225" lastFinishedPulling="2025-02-13 15:57:01.903149751 +0000 UTC m=+69.174668559" observedRunningTime="2025-02-13 15:57:02.450758936 +0000 UTC m=+69.722277744" watchObservedRunningTime="2025-02-13 15:57:02.450875024 +0000 UTC m=+69.722393842" Feb 13 15:57:02.461845 containerd[1491]: 
time="2025-02-13T15:57:02.461789114Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260\""
Feb 13 15:57:02.462477 containerd[1491]: time="2025-02-13T15:57:02.462449844Z" level=info msg="StartContainer for \"25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260\""
Feb 13 15:57:02.505684 systemd[1]: Started cri-containerd-25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260.scope - libcontainer container 25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260.
Feb 13 15:57:02.534734 systemd[1]: cri-containerd-25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260.scope: Deactivated successfully.
Feb 13 15:57:02.536588 containerd[1491]: time="2025-02-13T15:57:02.536551951Z" level=info msg="StartContainer for \"25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260\" returns successfully"
Feb 13 15:57:02.566619 containerd[1491]: time="2025-02-13T15:57:02.566523730Z" level=info msg="shim disconnected" id=25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260 namespace=k8s.io
Feb 13 15:57:02.566619 containerd[1491]: time="2025-02-13T15:57:02.566601456Z" level=warning msg="cleaning up after shim disconnected" id=25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260 namespace=k8s.io
Feb 13 15:57:02.566619 containerd[1491]: time="2025-02-13T15:57:02.566613348Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:57:03.180221 kubelet[1803]: E0213 15:57:03.180108 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:03.351377 systemd[1]: run-containerd-runc-k8s.io-25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260-runc.vX1IFW.mount: Deactivated successfully.
Feb 13 15:57:03.351528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25f0caa8c2a965c50ff79c8c49bdeabf55b7b6a62c61575ea335d642bc687260-rootfs.mount: Deactivated successfully.
Feb 13 15:57:03.364891 kubelet[1803]: E0213 15:57:03.364836 1803 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:57:03.446032 kubelet[1803]: E0213 15:57:03.445885 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:57:03.446032 kubelet[1803]: E0213 15:57:03.445916 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:57:03.448229 containerd[1491]: time="2025-02-13T15:57:03.448152771Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:57:03.468026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889215683.mount: Deactivated successfully.
Feb 13 15:57:03.470709 containerd[1491]: time="2025-02-13T15:57:03.470650966Z" level=info msg="CreateContainer within sandbox \"2b0f05668d0cac498cb42b5a1664adcbe9ed7b2b099018ff362b4613cafb153f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b051be4c72c9d623ea627a4947d894e285052117683a666b8559f9842f59e0ff\""
Feb 13 15:57:03.471281 containerd[1491]: time="2025-02-13T15:57:03.471253938Z" level=info msg="StartContainer for \"b051be4c72c9d623ea627a4947d894e285052117683a666b8559f9842f59e0ff\""
Feb 13 15:57:03.506438 systemd[1]: Started cri-containerd-b051be4c72c9d623ea627a4947d894e285052117683a666b8559f9842f59e0ff.scope - libcontainer container b051be4c72c9d623ea627a4947d894e285052117683a666b8559f9842f59e0ff.
Feb 13 15:57:03.537773 containerd[1491]: time="2025-02-13T15:57:03.537720393Z" level=info msg="StartContainer for \"b051be4c72c9d623ea627a4947d894e285052117683a666b8559f9842f59e0ff\" returns successfully"
Feb 13 15:57:03.957345 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:57:04.180351 kubelet[1803]: E0213 15:57:04.180262 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:04.351398 systemd[1]: run-containerd-runc-k8s.io-b051be4c72c9d623ea627a4947d894e285052117683a666b8559f9842f59e0ff-runc.uabvQc.mount: Deactivated successfully.
Feb 13 15:57:04.450833 kubelet[1803]: E0213 15:57:04.450774 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:57:04.462897 kubelet[1803]: I0213 15:57:04.462855 1803 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-n2nwh" podStartSLOduration=5.462816704 podStartE2EDuration="5.462816704s" podCreationTimestamp="2025-02-13 15:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:57:04.462573538 +0000 UTC m=+71.734092346" watchObservedRunningTime="2025-02-13 15:57:04.462816704 +0000 UTC m=+71.734335512"
Feb 13 15:57:04.790933 kubelet[1803]: I0213 15:57:04.790889 1803 setters.go:568] "Node became not ready" node="10.0.0.115" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:57:04Z","lastTransitionTime":"2025-02-13T15:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:57:05.180515 kubelet[1803]: E0213 15:57:05.180444 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:05.531133 kubelet[1803]: E0213 15:57:05.530996 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:57:06.180953 kubelet[1803]: E0213 15:57:06.180902 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:07.069641 systemd-networkd[1421]: lxc_health: Link UP
Feb 13 15:57:07.079188 systemd-networkd[1421]: lxc_health: Gained carrier
Feb 13 15:57:07.182038 kubelet[1803]: E0213 15:57:07.181952 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:07.532888 kubelet[1803]: E0213 15:57:07.532549 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:57:08.183026 kubelet[1803]: E0213 15:57:08.182968 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:08.457834 kubelet[1803]: E0213 15:57:08.457695 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:57:08.921532 systemd-networkd[1421]: lxc_health: Gained IPv6LL
Feb 13 15:57:09.183619 kubelet[1803]: E0213 15:57:09.183455 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:09.459852 kubelet[1803]: E0213 15:57:09.459707 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:57:10.184714 kubelet[1803]: E0213 15:57:10.184656 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:11.185151 kubelet[1803]: E0213 15:57:11.185091 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:12.185505 kubelet[1803]: E0213 15:57:12.185442 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:13.131478 kubelet[1803]: E0213 15:57:13.129635 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:13.185954 kubelet[1803]: E0213 15:57:13.185888 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:57:14.186737 kubelet[1803]: E0213 15:57:14.186629 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"