Feb 13 19:44:20.903167 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 19:44:20.903187 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:44:20.903198 kernel: BIOS-provided physical RAM map:
Feb 13 19:44:20.903204 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:44:20.903210 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:44:20.903216 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:44:20.903223 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 19:44:20.903230 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 19:44:20.903236 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:44:20.903244 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:44:20.903250 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:44:20.903256 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:44:20.903262 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:44:20.903268 kernel: NX (Execute Disable) protection: active
Feb 13 19:44:20.903301 kernel: APIC: Static calls initialized
Feb 13 19:44:20.903318 kernel: SMBIOS 2.8 present.
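
The BIOS-e820 entries above are the firmware's physical memory map; the kernel sums the `usable` ranges to size RAM. As a side illustration (not part of the boot flow), here is a minimal Python sketch that parses such lines from a saved copy of this log and totals usable memory; the regex and the file name `dmesg.txt` are assumptions for the example:

```python
import re

# Matches e.g. "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable"
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

usable = 0
with open("dmesg.txt") as f:  # assumed: a captured copy of this boot log
    for line in f:
        m = E820_RE.search(line)
        if m:
            start, end, kind = int(m.group(1), 16), int(m.group(2), 16), m.group(3)
            if kind == "usable":
                usable += end - start + 1  # e820 ranges are inclusive

print(f"usable RAM reported by firmware: {usable / 2**20:.1f} MiB")
```
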
Feb 13 19:44:20.903327 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 19:44:20.903333 kernel: Hypervisor detected: KVM
Feb 13 19:44:20.903340 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:44:20.903346 kernel: kvm-clock: using sched offset of 2363816858 cycles
Feb 13 19:44:20.903353 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:44:20.903360 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 19:44:20.903367 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:44:20.903375 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:44:20.903382 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 19:44:20.903391 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:44:20.903398 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:44:20.903405 kernel: Using GB pages for direct mapping
Feb 13 19:44:20.903412 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:44:20.903418 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 19:44:20.903425 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:44:20.903432 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:44:20.903439 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:44:20.903448 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 19:44:20.903455 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:44:20.903462 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:44:20.903468 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:44:20.903475 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:44:20.903482 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 19:44:20.903489 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 19:44:20.903499 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 19:44:20.903509 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 19:44:20.903516 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 19:44:20.903523 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 19:44:20.903530 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 19:44:20.903537 kernel: No NUMA configuration found
Feb 13 19:44:20.903544 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 19:44:20.903551 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 19:44:20.903560 kernel: Zone ranges:
Feb 13 19:44:20.903568 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:44:20.903575 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 19:44:20.903582 kernel: Normal empty
Feb 13 19:44:20.903589 kernel: Movable zone start for each node
Feb 13 19:44:20.903596 kernel: Early memory node ranges
Feb 13 19:44:20.903603 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:44:20.903610 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 19:44:20.903617 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 19:44:20.903626 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:44:20.903633 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:44:20.903640 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 19:44:20.903647 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:44:20.903654 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:44:20.903661 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:44:20.903668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:44:20.903675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:44:20.903689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:44:20.903699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:44:20.903706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:44:20.903713 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:44:20.903721 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:44:20.903728 kernel: TSC deadline timer available
Feb 13 19:44:20.903735 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:44:20.903742 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:44:20.903749 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:44:20.903756 kernel: kvm-guest: setup PV sched yield
Feb 13 19:44:20.903765 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:44:20.903772 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:44:20.903779 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:44:20.903786 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:44:20.903793 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:44:20.903800 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:44:20.903807 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:44:20.903814 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:44:20.903821 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:44:20.903829 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:44:20.903839 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:44:20.903846 kernel: random: crng init done
Feb 13 19:44:20.903853 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:44:20.903861 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:44:20.903868 kernel: Fallback order for Node 0: 0
Feb 13 19:44:20.903875 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 19:44:20.903882 kernel: Policy zone: DMA32
Feb 13 19:44:20.903889 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:44:20.903898 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved)
Feb 13 19:44:20.903905 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:44:20.903913 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 19:44:20.903920 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:44:20.903927 kernel: Dynamic Preempt: voluntary
Feb 13 19:44:20.903934 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:44:20.903942 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:44:20.903949 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:44:20.903956 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:44:20.903967 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:44:20.903975 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:44:20.903984 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:44:20.903992 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:44:20.903999 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:44:20.904006 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:44:20.904013 kernel: Console: colour VGA+ 80x25
Feb 13 19:44:20.904020 kernel: printk: console [ttyS0] enabled
Feb 13 19:44:20.904026 kernel: ACPI: Core revision 20230628
Feb 13 19:44:20.904036 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:44:20.904043 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:44:20.904050 kernel: x2apic enabled
Feb 13 19:44:20.904057 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:44:20.904064 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:44:20.904071 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:44:20.904078 kernel: kvm-guest: setup PV IPIs
Feb 13 19:44:20.904095 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:44:20.904102 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:44:20.904109 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 19:44:20.904117 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:44:20.904124 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:44:20.904134 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:44:20.904141 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:44:20.904148 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:44:20.904156 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:44:20.904163 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:44:20.904173 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:44:20.904180 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:44:20.904188 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:44:20.904195 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:44:20.904203 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:44:20.904211 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:44:20.904218 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:44:20.904226 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:44:20.904235 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:44:20.904243 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:44:20.904250 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:44:20.904258 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:44:20.904265 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:44:20.904272 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:44:20.904280 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:44:20.904359 kernel: landlock: Up and running.
Feb 13 19:44:20.904366 kernel: SELinux: Initializing.
Feb 13 19:44:20.904377 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:44:20.904384 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:44:20.904392 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:44:20.904399 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:44:20.904406 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:44:20.904414 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:44:20.904421 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:44:20.904429 kernel: ... version: 0
Feb 13 19:44:20.904436 kernel: ... bit width: 48
Feb 13 19:44:20.904445 kernel: ... generic registers: 6
Feb 13 19:44:20.904453 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:44:20.904460 kernel: ... max period: 00007fffffffffff
Feb 13 19:44:20.904467 kernel: ... fixed-purpose events: 0
Feb 13 19:44:20.904475 kernel: ... event mask: 000000000000003f
Feb 13 19:44:20.904482 kernel: signal: max sigframe size: 1776
Feb 13 19:44:20.904490 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:44:20.904497 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:44:20.904505 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:44:20.904514 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:44:20.904521 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:44:20.904529 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:44:20.904536 kernel: smpboot: Max logical packages: 1
Feb 13 19:44:20.904544 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 19:44:20.904551 kernel: devtmpfs: initialized
Feb 13 19:44:20.904558 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:44:20.904566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:44:20.904573 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:44:20.904583 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:44:20.904590 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:44:20.904597 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:44:20.904605 kernel: audit: type=2000 audit(1739475859.819:1): state=initialized audit_enabled=0 res=1
Feb 13 19:44:20.904612 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:44:20.904620 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:44:20.904627 kernel: cpuidle: using governor menu
Feb 13 19:44:20.904634 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:44:20.904642 kernel: dca service started, version 1.12.1
Feb 13 19:44:20.904651 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:44:20.904659 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:44:20.904666 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:44:20.904674 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
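
The "Memory: 2434592K/2571752K available" line can be cross-checked against the trimmed e820 map above. After the kernel reserves the first page and removes the 0xa0000-0xfffff legacy hole, the present memory is exactly the two "node 0" ranges; a quick arithmetic check, with the `[start, end)` bounds taken from those lines:

```python
# Cross-check of "Memory: 2434592K/2571752K available": the second figure is
# the physical memory present after the kernel trims the e820 map.
ranges = [(0x1000, 0x9f000), (0x100000, 0x9cfdc000)]  # from the "node 0" lines
present_kib = sum(end - start for start, end in ranges) // 1024
print(present_kib)        # 2571752 -> matches .../2571752K
print(2571752 - 2434592)  # 137160 KiB held by kernel text/data and reservations
```
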
Feb 13 19:44:20.904681 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:44:20.904697 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:44:20.904704 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:44:20.904712 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:44:20.904720 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:44:20.904729 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:44:20.904737 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:44:20.904744 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:44:20.904752 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:44:20.904759 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:44:20.904766 kernel: ACPI: Interpreter enabled
Feb 13 19:44:20.904773 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:44:20.904781 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:44:20.904788 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:44:20.904798 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:44:20.904806 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:44:20.904813 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:44:20.905090 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:44:20.905223 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:44:20.905362 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:44:20.905374 kernel: PCI host bridge to bus 0000:00
Feb 13 19:44:20.905503 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:44:20.905615 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:44:20.905737 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:44:20.905847 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:44:20.905960 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:44:20.906069 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 19:44:20.906228 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:44:20.906394 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:44:20.906528 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:44:20.906651 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 19:44:20.906784 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 19:44:20.906904 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 19:44:20.907023 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:44:20.907167 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:44:20.907362 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:44:20.907488 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 19:44:20.907608 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 19:44:20.907749 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:44:20.907870 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:44:20.907994 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 19:44:20.908118 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 19:44:20.908246 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:44:20.908392 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 19:44:20.908512 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 19:44:20.908632 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 19:44:20.908762 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 19:44:20.908893 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:44:20.909018 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:44:20.909154 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:44:20.909301 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 19:44:20.909425 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 19:44:20.909553 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:44:20.909675 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:44:20.909693 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:44:20.909705 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:44:20.909713 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:44:20.909721 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:44:20.909728 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:44:20.909736 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:44:20.909743 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:44:20.909751 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:44:20.909758 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:44:20.909766 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:44:20.909775 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:44:20.909783 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:44:20.909790 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:44:20.909798 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:44:20.909805 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:44:20.909813 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:44:20.909820 kernel: iommu: Default domain type: Translated
Feb 13 19:44:20.909828 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:44:20.909835 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:44:20.909845 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:44:20.909852 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:44:20.909860 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 19:44:20.909985 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:44:20.910105 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:44:20.910225 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:44:20.910235 kernel: vgaarb: loaded
Feb 13 19:44:20.910243 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:44:20.910254 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:44:20.910261 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:44:20.910269 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:44:20.910277 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:44:20.910310 kernel: pnp: PnP ACPI init
Feb 13 19:44:20.910441 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:44:20.910452 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:44:20.910460 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:44:20.910471 kernel: NET: Registered PF_INET protocol family
Feb 13 19:44:20.910478 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:44:20.910486 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:44:20.910494 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:44:20.910501 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:44:20.910508 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:44:20.910516 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:44:20.910523 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:44:20.910530 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:44:20.910540 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:44:20.910547 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:44:20.910659 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:44:20.910777 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:44:20.910887 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:44:20.910995 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:44:20.911111 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:44:20.911219 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 19:44:20.911232 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:44:20.911240 kernel: Initialise system trusted keyrings
Feb 13 19:44:20.911248 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:44:20.911256 kernel: Key type asymmetric registered
Feb 13 19:44:20.911263 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:44:20.911271 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:44:20.911278 kernel: io scheduler mq-deadline registered
Feb 13 19:44:20.911308 kernel: io scheduler kyber registered
Feb 13 19:44:20.911316 kernel: io scheduler bfq registered
Feb 13 19:44:20.911326 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:44:20.911335 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:44:20.911342 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:44:20.911350 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:44:20.911357 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:44:20.911365 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:44:20.911373 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:44:20.911380 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:44:20.911388 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:44:20.911519 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:44:20.911634 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:44:20.911644 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:44:20.911764 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:44:20 UTC (1739475860)
Feb 13 19:44:20.911876 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:44:20.911886 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:44:20.911893 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:44:20.911901 kernel: Segment Routing with IPv6
Feb 13 19:44:20.911912 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:44:20.911920 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:44:20.911928 kernel: Key type dns_resolver registered
Feb 13 19:44:20.911935 kernel: IPI shorthand broadcast: enabled
Feb 13 19:44:20.911943 kernel: sched_clock: Marking stable (588003086, 106524133)->(747910282, -53383063)
Feb 13 19:44:20.911950 kernel: registered taskstats version 1
Feb 13 19:44:20.911958 kernel: Loading compiled-in X.509 certificates
Feb 13 19:44:20.911965 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 19:44:20.911973 kernel: Key type .fscrypt registered
Feb 13 19:44:20.911982 kernel: Key type fscrypt-provisioning registered
Feb 13 19:44:20.911990 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:44:20.911997 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:44:20.912005 kernel: ima: No architecture policies found
Feb 13 19:44:20.912012 kernel: clk: Disabling unused clocks
Feb 13 19:44:20.912020 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 19:44:20.912027 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 19:44:20.912035 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 19:44:20.912042 kernel: Run /init as init process
Feb 13 19:44:20.912052 kernel: with arguments:
Feb 13 19:44:20.912059 kernel: /init
Feb 13 19:44:20.912066 kernel: with environment:
Feb 13 19:44:20.912074 kernel: HOME=/
Feb 13 19:44:20.912081 kernel: TERM=linux
Feb 13 19:44:20.912088 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:44:20.912098 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:44:20.912108 systemd[1]: Detected virtualization kvm.
Feb 13 19:44:20.912118 systemd[1]: Detected architecture x86-64.
Feb 13 19:44:20.912126 systemd[1]: Running in initrd.
Feb 13 19:44:20.912134 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:44:20.912141 systemd[1]: Hostname set to .
Feb 13 19:44:20.912149 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:44:20.912157 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:44:20.912165 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:44:20.912173 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:44:20.912184 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:44:20.912203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
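
The rtc_cmos line above pairs an ISO timestamp with its epoch second, 1739475860 (the earlier audit record used the same clock at 1739475859.819). The conversion can be reproduced directly; this is just an illustration of the equivalence, not part of the boot flow:

```python
from datetime import datetime, timezone

# Epoch 1739475860 is the UTC wall-clock time the kernel set from the CMOS clock.
print(datetime.fromtimestamp(1739475860, tz=timezone.utc).isoformat())
# -> 2025-02-13T19:44:20+00:00, matching "2025-02-13T19:44:20 UTC (1739475860)"
```
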
Feb 13 19:44:20.912214 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:44:20.912222 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:44:20.912232 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:44:20.912243 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:44:20.912251 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:44:20.912259 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:44:20.912267 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:44:20.912275 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:44:20.912331 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:44:20.912348 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:44:20.912356 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:44:20.912368 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:44:20.912376 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:44:20.912384 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:44:20.912392 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:44:20.912401 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:44:20.912409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:44:20.912418 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:44:20.912426 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:44:20.912434 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:44:20.912444 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:44:20.912452 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:44:20.912460 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:44:20.912471 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:44:20.912479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:20.912487 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:44:20.912496 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:44:20.912507 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:44:20.912520 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:44:20.912551 systemd-journald[193]: Collecting audit messages is disabled.
Feb 13 19:44:20.912575 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:44:20.912585 systemd-journald[193]: Journal started
Feb 13 19:44:20.912609 systemd-journald[193]: Runtime Journal (/run/log/journal/aabb66df48db4e43af83d7e14b3e53f7) is 6.0M, max 48.4M, 42.3M free.
Feb 13 19:44:20.903352 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 19:44:20.955249 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
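
The `\x2d` sequences in unit names like dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device are systemd's path escaping: '/' becomes '-', and bytes outside [A-Za-z0-9:_.] (including a literal '-') are hex-escaped. A minimal sketch of that rule, a simplified stand-in for what `systemd-escape --path` does:

```python
def systemd_escape_path(path: str) -> str:
    """Simplified systemd path escaping for unit names: strip the leading '/',
    turn remaining '/' into '-', and hex-escape any byte outside
    [A-Za-z0-9:_.] as \\xXX (so a literal '-' becomes \\x2d)."""
    safe = set("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789:_.")
    out = []
    for i, ch in enumerate(path.strip("/")):
        if ch == "/":
            out.append("-")
        elif ch in safe and not (i == 0 and ch == "."):
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above
```
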
Feb 13 19:44:20.955281 kernel: Bridge firewalling registered
Feb 13 19:44:20.955318 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:44:20.939314 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 19:44:20.950771 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:44:20.951392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:20.958503 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:44:20.960677 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:44:20.961906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:44:20.965423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:44:20.980579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:44:20.980986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:44:20.983995 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:20.996602 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:44:20.998124 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:44:21.002969 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:44:21.012848 dracut-cmdline[227]: dracut-dracut-053
Feb 13 19:44:21.016180 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:44:21.062872 systemd-resolved[231]: Positive Trust Anchors:
Feb 13 19:44:21.062891 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:44:21.062932 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:44:21.066162 systemd-resolved[231]: Defaulting to hostname 'linux'.
Feb 13 19:44:21.067536 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:44:21.073495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:44:21.118321 kernel: SCSI subsystem initialized
Feb 13 19:44:21.127325 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:44:21.139337 kernel: iscsi: registered transport (tcp)
Feb 13 19:44:21.223589 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:44:21.223670 kernel: QLogic iSCSI HBA Driver
Feb 13 19:44:21.280922 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
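
The dracut-cmdline line above shows the effective command line: dracut's own additions (rd.driver.pre=btrfs, plus a restated rootflags=rw mount.usrflags=ro) are prepended, so some keys appear twice. A hedged sketch of how such a line splits into key/value pairs; the string here is an abbreviated copy of the one in the log:

```python
# Illustrative parse of the kernel command line quoted above; repeated keys
# (rootflags, mount.usrflags) are collected into lists, which shows why the
# duplicates are harmless when the values agree.
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "BOOT_IMAGE=/flatcar/vmlinuz-a root=LABEL=ROOT "
           "rootflags=rw mount.usrflags=ro console=ttyS0,115200")

params: dict[str, list[str]] = {}
for token in cmdline.split():
    key, _, value = token.partition("=")  # split on the first '=' only
    params.setdefault(key, []).append(value)

print(params["rootflags"])  # ['rw', 'rw']
print(params["root"])       # ['LABEL=ROOT'] -- the value keeps its own '='
```
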
Feb 13 19:44:21.292473 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:44:21.321639 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:44:21.321721 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:44:21.322984 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:44:21.365320 kernel: raid6: avx2x4 gen() 24152 MB/s
Feb 13 19:44:21.382314 kernel: raid6: avx2x2 gen() 27347 MB/s
Feb 13 19:44:21.400638 kernel: raid6: avx2x1 gen() 19609 MB/s
Feb 13 19:44:21.400684 kernel: raid6: using algorithm avx2x2 gen() 27347 MB/s
Feb 13 19:44:21.421315 kernel: raid6: .... xor() 17049 MB/s, rmw enabled
Feb 13 19:44:21.421344 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:44:21.452336 kernel: xor: automatically using best checksumming function avx
Feb 13 19:44:21.617352 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:44:21.633133 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:44:21.649610 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:44:21.662361 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Feb 13 19:44:21.667361 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:44:21.682515 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:44:21.697727 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Feb 13 19:44:21.731146 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:44:21.751499 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:44:21.815998 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:44:21.827425 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:44:21.841382 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:44:21.851591 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:44:21.853012 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:44:21.855066 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:44:21.864473 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:44:21.872311 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:44:21.874315 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:44:21.903805 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:44:21.903967 kernel: libata version 3.00 loaded.
Feb 13 19:44:21.903986 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:44:21.914475 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:44:21.914491 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:44:21.914653 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:44:21.914803 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:44:21.914816 kernel: GPT:9289727 != 19775487
Feb 13 19:44:21.914826 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:44:21.914837 kernel: GPT:9289727 != 19775487
Feb 13 19:44:21.914853 kernel: GPT: Use GNU Parted to correct GPT errors.
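
The GPT warnings above follow from the geometry in the virtio_blk line: the disk now has 19775488 sectors, so the backup GPT header belongs at the last LBA, 19775487, but the kernel found it at 9289727. In other words, the image was built for a smaller disk and the virtual disk was grown afterwards; disk-uuid later rewrites the headers, as the log below shows. The arithmetic, spelled out:

```python
SECTOR = 512
disk_sectors = 19775488  # "virtio_blk ... 19775488 512-byte logical blocks"

expected_backup_lba = disk_sectors - 1  # backup GPT header sits on the last LBA
found_backup_lba = 9289727              # where the kernel actually found it

print(expected_backup_lba)                      # 19775487, the "!=" in the log
print((found_backup_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB, the image's original size
```
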
Feb 13 19:44:21.914864 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:44:21.914874 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:44:21.914885 kernel: scsi host0: ahci
Feb 13 19:44:21.915047 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:44:21.915059 kernel: scsi host1: ahci
Feb 13 19:44:21.915204 kernel: scsi host2: ahci
Feb 13 19:44:21.915401 kernel: scsi host3: ahci
Feb 13 19:44:21.915548 kernel: scsi host4: ahci
Feb 13 19:44:21.915708 kernel: scsi host5: ahci
Feb 13 19:44:21.915852 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 19:44:21.915864 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 19:44:21.915874 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 19:44:21.915884 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 19:44:21.915895 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 19:44:21.915909 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 19:44:21.882018 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:44:21.891785 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:44:21.891850 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:21.895709 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:44:21.896898 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:44:21.896949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:21.898135 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:21.907510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:21.936337 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (469)
Feb 13 19:44:21.936392 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (461)
Feb 13 19:44:21.950720 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:44:21.976375 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:44:21.978029 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:21.985569 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:44:21.986866 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:44:21.996845 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:44:22.013553 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:44:22.015795 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:44:22.028203 disk-uuid[554]: Primary Header is updated.
Feb 13 19:44:22.028203 disk-uuid[554]: Secondary Entries is updated.
Feb 13 19:44:22.028203 disk-uuid[554]: Secondary Header is updated.
Feb 13 19:44:22.032315 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:44:22.037311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:44:22.039543 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:22.222263 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:44:22.222369 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:22.222381 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:22.222392 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:22.223317 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:44:22.224384 kernel: ata3.00: applying bridge limits
Feb 13 19:44:22.225313 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:22.226315 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:44:22.226332 kernel: ata3.00: configured for UDMA/100
Feb 13 19:44:22.227313 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:44:22.272468 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:44:22.285266 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:44:22.285306 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:44:23.038187 disk-uuid[557]: The operation has completed successfully.
Feb 13 19:44:23.039605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:44:23.069908 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:44:23.070038 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:44:23.092468 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:44:23.096156 sh[591]: Success
Feb 13 19:44:23.110393 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:44:23.147462 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:44:23.159218 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:44:23.162413 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:44:23.177427 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6
Feb 13 19:44:23.177479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:23.177494 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:44:23.178708 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:44:23.180399 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:44:23.185457 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:44:23.186279 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:44:23.194491 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:44:23.195409 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:44:23.206484 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:23.206532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:23.206548 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:44:23.210349 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:44:23.219526 systemd[1]: mnt-oem.mount: Deactivated successfully.
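
verity-setup above activates /dev/mapper/usr against the verity.usrhash root hash from the kernel command line: dm-verity hashes each 4 KiB data block and folds the digests into a Merkle tree whose root must equal that hash. A toy sketch of the tree construction; real dm-verity additionally uses a salt, a superblock, and fixed hash-block packing, all omitted here:

```python
import hashlib

BLOCK = 4096

def verity_root(data: bytes) -> bytes:
    """Toy Merkle-tree root over 4 KiB blocks with sha256, no salt/padding --
    a simplification of what dm-verity checks against verity.usrhash."""
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
    while len(level) > 1:
        # 4096 / 32 = 128 child digests fit in one hash-tree node
        level = [hashlib.sha256(b"".join(level[i:i + 128])).digest()
                 for i in range(0, len(level), 128)]
    return level[0]

print(verity_root(b"\0" * BLOCK * 4).hex())
```
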
Feb 13 19:44:23.222315 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:23.232155 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:44:23.243462 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:44:23.303647 ignition[687]: Ignition 2.20.0
Feb 13 19:44:23.303658 ignition[687]: Stage: fetch-offline
Feb 13 19:44:23.303701 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:23.303711 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:44:23.303813 ignition[687]: parsed url from cmdline: ""
Feb 13 19:44:23.303818 ignition[687]: no config URL provided
Feb 13 19:44:23.303823 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:44:23.303834 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:44:23.303865 ignition[687]: op(1): [started] loading QEMU firmware config module
Feb 13 19:44:23.303870 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:44:23.315133 ignition[687]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:44:23.316528 ignition[687]: parsing config with SHA512: 0fd54810bf0c1c6a703077f6c038e59d3c818678a4e363e872863d8e01ff96a3b0e414d5a834b70668cd6ef64bca18d8e920d8fa46157d128ce359343e802560
Feb 13 19:44:23.319658 unknown[687]: fetched base config from "system"
Feb 13 19:44:23.319796 unknown[687]: fetched user config from "qemu"
Feb 13 19:44:23.320063 ignition[687]: fetch-offline: fetch-offline passed
Feb 13 19:44:23.320138 ignition[687]: Ignition finished successfully
Feb 13 19:44:23.324873 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:44:23.328499 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:44:23.338435 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:44:23.359541 systemd-networkd[781]: lo: Link UP
Feb 13 19:44:23.359554 systemd-networkd[781]: lo: Gained carrier
Feb 13 19:44:23.361244 systemd-networkd[781]: Enumeration completed
Feb 13 19:44:23.361373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:44:23.361684 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:44:23.361688 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:44:23.362941 systemd-networkd[781]: eth0: Link UP
Feb 13 19:44:23.362945 systemd-networkd[781]: eth0: Gained carrier
Feb 13 19:44:23.362952 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:44:23.363686 systemd[1]: Reached target network.target - Network.
Feb 13 19:44:23.365728 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:44:23.375461 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
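
Ignition logs the SHA512 of the config it parsed ("parsing config with SHA512: 0fd548…"). Reproducing that kind of digest for a config file is one-line hashlib work; the path `user.ign` is a placeholder here, since Ignition hashes the exact bytes it fetched:

```python
import hashlib

# Compute the same kind of digest Ignition logs for a parsed config.
with open("user.ign", "rb") as f:  # placeholder path for the fetched config
    print(hashlib.sha512(f.read()).hexdigest())
```
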
Feb 13 19:44:23.386357 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:44:23.388308 ignition[783]: Ignition 2.20.0
Feb 13 19:44:23.388578 ignition[783]: Stage: kargs
Feb 13 19:44:23.388752 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:23.388763 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:44:23.389391 ignition[783]: kargs: kargs passed
Feb 13 19:44:23.389429 ignition[783]: Ignition finished successfully
Feb 13 19:44:23.392063 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:44:23.402454 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:44:23.413706 ignition[793]: Ignition 2.20.0
Feb 13 19:44:23.413716 ignition[793]: Stage: disks
Feb 13 19:44:23.413880 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:23.413890 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:44:23.414546 ignition[793]: disks: disks passed
Feb 13 19:44:23.414587 ignition[793]: Ignition finished successfully
Feb 13 19:44:23.416707 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:44:23.419380 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:44:23.421323 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:44:23.422776 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:44:23.422834 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:44:23.423242 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:44:23.436514 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:44:23.448587 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.131
Feb 13 19:44:23.448605 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Feb 13 19:44:23.453090 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:44:23.459385 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:44:23.478488 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:44:23.589322 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none.
Feb 13 19:44:23.590004 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:44:23.592575 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:44:23.605436 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:44:23.608273 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:44:23.610753 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:44:23.610811 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:44:23.619967 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813)
Feb 13 19:44:23.619993 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:23.620005 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:23.620015 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:44:23.610838 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:44:23.622486 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:44:23.624576 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:44:23.625683 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:44:23.637506 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:44:23.668470 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:44:23.673593 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:44:23.677569 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:44:23.682640 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:44:23.773837 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:44:23.787400 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:44:23.790187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:44:23.798384 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:23.814983 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:44:23.821682 ignition[927]: INFO : Ignition 2.20.0
Feb 13 19:44:23.821682 ignition[927]: INFO : Stage: mount
Feb 13 19:44:23.823493 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:23.823493 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:44:23.823493 ignition[927]: INFO : mount: mount passed
Feb 13 19:44:23.823493 ignition[927]: INFO : Ignition finished successfully
Feb 13 19:44:23.829824 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:44:23.841522 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:44:24.176507 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:44:24.185476 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:44:24.194757 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940)
Feb 13 19:44:24.194804 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:24.194819 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:24.195698 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:44:24.199318 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:44:24.200582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:44:24.220239 ignition[957]: INFO : Ignition 2.20.0
Feb 13 19:44:24.220239 ignition[957]: INFO : Stage: files
Feb 13 19:44:24.222222 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:24.222222 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:44:24.222222 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:44:24.222222 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:44:24.222222 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:44:24.229262 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 19:44:24.225031 unknown[957]: wrote ssh authorized keys file for user: core
Feb 13 19:44:24.638327 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:44:25.001936 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:44:25.001936 ignition[957]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 19:44:25.006129 ignition[957]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:44:25.006129 ignition[957]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:44:25.006129 ignition[957]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 19:44:25.006129 ignition[957]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:44:25.030228 ignition[957]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:44:25.035648 ignition[957]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:44:25.037538 ignition[957]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:44:25.037538 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:44:25.037538 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:44:25.037538 ignition[957]: INFO : files: files passed
Feb 13 19:44:25.037538 ignition[957]: INFO : Ignition finished successfully
Feb 13 19:44:25.039254 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:44:25.053439 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:44:25.054432 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:44:25.057720 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:44:25.057829 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:44:25.066420 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:44:25.069931 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:44:25.069931 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:44:25.073423 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:44:25.074188 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:44:25.076393 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:44:25.087456 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:44:25.109626 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:44:25.109746 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:44:25.113509 systemd-networkd[781]: eth0: Gained IPv6LL
Feb 13 19:44:25.115179 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:44:25.117450 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:44:25.118656 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:44:25.132428 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:44:25.144351 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:44:25.156394 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:44:25.164713 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:44:25.166102 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:44:25.183960 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:44:25.185027 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:44:25.185138 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:44:25.187950 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:44:25.189735 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:44:25.191864 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:44:25.194108 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:44:25.196543 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:44:25.210829 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:44:25.213078 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:44:25.215715 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:44:25.218096 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:44:25.220378 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:44:25.222644 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:44:25.222818 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:44:25.225332 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:44:25.227359 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:44:25.250645 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:44:25.250793 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:44:25.253049 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:44:25.253221 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:44:25.255930 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:44:25.256078 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:44:25.258082 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:44:25.260268 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:44:25.264385 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:44:25.266158 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:44:25.268353 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:44:25.270654 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:44:25.270782 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:44:25.281817 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:44:25.281937 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:44:25.284132 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:44:25.284300 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:44:25.287081 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:44:25.287228 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:44:25.303445 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:44:25.304447 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:44:25.304613 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:44:25.309160 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:44:25.310273 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:44:25.317222 ignition[1013]: INFO : Ignition 2.20.0
Feb 13 19:44:25.317222 ignition[1013]: INFO : Stage: umount
Feb 13 19:44:25.317222 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:25.317222 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:44:25.317222 ignition[1013]: INFO : umount: umount passed
Feb 13 19:44:25.317222 ignition[1013]: INFO : Ignition finished successfully
Feb 13 19:44:25.310485 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:44:25.313116 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:44:25.313403 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:44:25.319197 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:44:25.319315 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:44:25.339155 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:44:25.339275 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:44:25.343722 systemd[1]: Stopped target network.target - Network.
Feb 13 19:44:25.345973 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:44:25.346029 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:44:25.347250 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:44:25.347332 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:44:25.349866 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:44:25.349913 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:44:25.352312 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:44:25.352360 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:44:25.354523 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:44:25.356874 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:44:25.359904 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:44:25.360328 systemd-networkd[781]: eth0: DHCPv6 lease lost
Feb 13 19:44:25.363034 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:44:25.363156 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:44:25.364632 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:44:25.364678 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:44:25.375382 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:44:25.376524 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:44:25.376613 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:44:25.379164 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:44:25.382635 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:44:25.382790 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:44:25.388732 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:44:25.388798 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:44:25.390915 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:44:25.390991 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:44:25.393213 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:44:25.393313 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:44:25.396150 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:44:25.396305 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:44:25.398078 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:44:25.398240 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:44:25.401447 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:44:25.401508 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:44:25.403426 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:44:25.403470 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:44:25.405482 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:44:25.405538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:44:25.407696 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:44:25.407751 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:44:25.409731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:44:25.409788 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:25.421476 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:44:25.423517 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:44:25.423594 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:44:25.425740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:44:25.425789 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:25.429596 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:44:25.429723 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:44:25.652572 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:44:25.652718 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:44:25.655355 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:44:25.656762 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:44:25.656823 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:44:25.670436 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:44:25.677650 systemd[1]: Switching root.
Feb 13 19:44:25.706807 systemd-journald[193]: Journal stopped
Feb 13 19:44:26.843795 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:44:26.843859 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:44:26.843873 kernel: SELinux: policy capability open_perms=1
Feb 13 19:44:26.843885 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:44:26.843896 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:44:26.843909 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:44:26.843929 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:44:26.843940 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:44:26.843956 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:44:26.843971 kernel: audit: type=1403 audit(1739475866.039:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:44:26.843984 systemd[1]: Successfully loaded SELinux policy in 46.072ms.
Feb 13 19:44:26.844004 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.086ms.
Feb 13 19:44:26.844017 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:44:26.844029 systemd[1]: Detected virtualization kvm.
Feb 13 19:44:26.844043 systemd[1]: Detected architecture x86-64.
Feb 13 19:44:26.844055 systemd[1]: Detected first boot.
Feb 13 19:44:26.844067 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:44:26.844082 zram_generator::config[1058]: No configuration found.
Feb 13 19:44:26.844095 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:44:26.844109 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:44:26.844121 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:44:26.844133 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:44:26.844145 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:44:26.844157 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:44:26.844169 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:44:26.844186 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:44:26.844198 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:44:26.844211 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:44:26.844225 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:44:26.844237 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:44:26.844249 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:44:26.844261 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:44:26.844274 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:44:26.844319 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:44:26.844332 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:44:26.844348 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:44:26.844363 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:44:26.844388 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:44:26.844402 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:44:26.844417 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:44:26.844433 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:44:26.844450 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:44:26.844465 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:44:26.844481 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:44:26.844500 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:44:26.844521 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:44:26.844537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:44:26.844553 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:44:26.844569 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:44:26.844582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:44:26.844594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:44:26.844609 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:44:26.844625 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:44:26.844637 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:44:26.844652 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:44:26.844665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:26.844677 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:44:26.844692 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:44:26.844708 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:44:26.844721 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:44:26.844736 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:44:26.844752 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:44:26.844770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:44:26.844783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:44:26.844795 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:44:26.844807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:44:26.844820 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:44:26.844832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:44:26.844844 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:44:26.844855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:44:26.844870 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:44:26.844882 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:44:26.844894 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:44:26.844906 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:44:26.844918 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:44:26.844930 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:44:26.844943 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:44:26.844955 kernel: loop: module loaded
Feb 13 19:44:26.844966 kernel: fuse: init (API version 7.39)
Feb 13 19:44:26.844980 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:44:26.844992 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:44:26.845005 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:44:26.845018 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:44:26.845029 systemd[1]: Stopped verity-setup.service.
Feb 13 19:44:26.845042 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:26.845055 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:44:26.845067 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:44:26.845080 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:44:26.845097 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:44:26.845112 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:44:26.845127 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:44:26.845142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:44:26.845156 kernel: ACPI: bus type drm_connector registered
Feb 13 19:44:26.845173 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:44:26.845188 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:44:26.845219 systemd-journald[1122]: Collecting audit messages is disabled.
Feb 13 19:44:26.845241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:44:26.845253 systemd-journald[1122]: Journal started
Feb 13 19:44:26.845274 systemd-journald[1122]: Runtime Journal (/run/log/journal/aabb66df48db4e43af83d7e14b3e53f7) is 6.0M, max 48.4M, 42.3M free.
Feb 13 19:44:26.577442 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:44:26.596429 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:44:26.597062 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:44:26.849229 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:44:26.851646 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:44:26.852870 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:44:26.853202 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:44:26.855049 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:44:26.855358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:44:26.857477 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:44:26.857777 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:44:26.859895 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:44:26.860183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:44:26.862010 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:44:26.863832 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:44:26.865622 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:44:26.867705 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:44:26.887049 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:44:26.900386 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:44:26.903201 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:44:26.904815 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:44:26.904856 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:44:26.907490 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:44:26.910413 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:44:26.913116 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:44:26.914595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:44:26.918045 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:44:26.921951 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:44:26.923543 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:44:26.924935 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:44:26.926620 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:44:26.927847 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:44:26.935761 systemd-journald[1122]: Time spent on flushing to /var/log/journal/aabb66df48db4e43af83d7e14b3e53f7 is 24.618ms for 934 entries.
Feb 13 19:44:26.935761 systemd-journald[1122]: System Journal (/var/log/journal/aabb66df48db4e43af83d7e14b3e53f7) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:44:26.978007 systemd-journald[1122]: Received client request to flush runtime journal.
Feb 13 19:44:26.978042 kernel: loop0: detected capacity change from 0 to 138184
Feb 13 19:44:26.936988 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:44:26.942593 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:44:26.947636 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:44:26.949434 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:44:26.951585 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:44:26.953406 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:44:26.956860 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:44:26.960848 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:44:26.972603 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:44:26.979087 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:44:26.984951 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:44:26.990862 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:44:26.998212 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:44:27.007306 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:44:27.008346 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:44:27.018545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:44:27.020763 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:44:27.021482 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:44:27.027462 kernel: loop1: detected capacity change from 0 to 205544
Feb 13 19:44:27.040993 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Feb 13 19:44:27.041014 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Feb 13 19:44:27.048320 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:44:27.058318 kernel: loop2: detected capacity change from 0 to 140992
Feb 13 19:44:27.101317 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 19:44:27.112320 kernel: loop4: detected capacity change from 0 to 205544
Feb 13 19:44:27.119314 kernel: loop5: detected capacity change from 0 to 140992
Feb 13 19:44:27.129597 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:44:27.130329 (sd-merge)[1196]: Merged extensions into '/usr'.
Feb 13 19:44:27.134607 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:44:27.134625 systemd[1]: Reloading...
Feb 13 19:44:27.201473 zram_generator::config[1225]: No configuration found.
Feb 13 19:44:27.254048 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:44:27.331267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:44:27.386006 systemd[1]: Reloading finished in 250 ms.
Feb 13 19:44:27.418880 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:44:27.420681 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:44:27.433665 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:44:27.435885 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:44:27.442069 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:44:27.442086 systemd[1]: Reloading...
Feb 13 19:44:27.461349 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:44:27.461728 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:44:27.462758 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:44:27.463059 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Feb 13 19:44:27.463139 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Feb 13 19:44:27.466678 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:44:27.466693 systemd-tmpfiles[1260]: Skipping /boot
Feb 13 19:44:27.480076 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:44:27.482461 systemd-tmpfiles[1260]: Skipping /boot
Feb 13 19:44:27.507340 zram_generator::config[1286]: No configuration found.
Feb 13 19:44:27.617584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:44:27.670912 systemd[1]: Reloading finished in 228 ms.
Feb 13 19:44:27.693204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:44:27.714356 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:44:27.717155 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:44:27.719469 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:44:27.724511 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:44:27.733379 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:44:27.735574 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:44:27.741816 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:44:27.747551 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:44:27.752509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:27.752758 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:44:27.756074 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:44:27.759135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:44:27.764432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:44:27.765925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:44:27.766140 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:27.768747 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:44:27.772424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:44:27.772909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:44:27.775626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:44:27.777127 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:44:27.780131 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:44:27.780869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:44:27.791110 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:44:27.795253 augenrules[1360]: No rules
Feb 13 19:44:27.795282 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Feb 13 19:44:27.795674 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:44:27.795911 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:44:27.800525 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:27.800778 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:44:27.813662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:44:27.817782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:44:27.820577 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:44:27.822229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:44:27.828086 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:44:27.829237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:27.830174 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:44:27.831695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:44:27.834829 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:44:27.836766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:44:27.836961 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:44:27.838640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:44:27.838809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:44:27.840707 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:44:27.840920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:44:27.843433 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:44:27.862739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:27.869526 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:44:27.870861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:44:27.872860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:44:27.876798 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:44:27.881063 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:44:27.886778 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:44:27.888001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:44:27.891127 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:44:27.892811 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:44:27.892839 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:27.893810 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:44:27.895441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:44:27.895677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:44:27.897892 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:44:27.898070 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:44:27.899683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:44:27.900472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:44:27.902409 augenrules[1399]: /sbin/augenrules: No change
Feb 13 19:44:27.913315 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1386)
Feb 13 19:44:27.916664 augenrules[1428]: No rules
Feb 13 19:44:27.917976 systemd-resolved[1328]: Positive Trust Anchors:
Feb 13 19:44:27.918037 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:44:27.918349 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:44:27.918426 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:44:27.918953 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:44:27.921791 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:44:27.922015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:44:27.926553 systemd-resolved[1328]: Defaulting to hostname 'linux'.
Feb 13 19:44:27.930085 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:44:27.944192 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:44:27.950684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:44:27.953463 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:44:27.968425 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 19:44:27.968075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:44:27.969518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:44:27.969616 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:44:27.972197 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:44:27.978355 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:44:27.991991 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:44:28.005466 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 19:44:28.009609 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:44:28.012522 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:44:28.012749 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:44:28.011591 systemd-networkd[1412]: lo: Link UP
Feb 13 19:44:28.011597 systemd-networkd[1412]: lo: Gained carrier
Feb 13 19:44:28.014102 systemd-networkd[1412]: Enumeration completed
Feb 13 19:44:28.015243 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:44:28.015256 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:44:28.015376 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:44:28.016878 systemd[1]: Reached target network.target - Network.
Feb 13 19:44:28.018731 systemd-networkd[1412]: eth0: Link UP
Feb 13 19:44:28.018744 systemd-networkd[1412]: eth0: Gained carrier
Feb 13 19:44:28.018759 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:44:28.026544 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:44:28.038397 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:44:28.044366 systemd-networkd[1412]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:44:28.114839 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:44:28.114944 systemd-timesyncd[1441]: Initial clock synchronization to Thu 2025-02-13 19:44:28.061180 UTC.
Feb 13 19:44:28.115844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:28.118166 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:44:28.121471 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:44:28.148778 kernel: kvm_amd: TSC scaling supported
Feb 13 19:44:28.148839 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:44:28.148875 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:44:28.148888 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:44:28.149388 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:44:28.150596 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:44:28.171309 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:44:28.204520 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:44:28.231534 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:44:28.233218 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:28.243720 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:44:28.283089 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:44:28.285041 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:44:28.286465 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:44:28.287972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:44:28.289580 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:44:28.291229 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:44:28.292551 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:44:28.293840 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:44:28.295164 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:44:28.295199 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:44:28.296280 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:44:28.298496 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:44:28.302020 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:44:28.315784 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:44:28.318985 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:44:28.320934 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:44:28.322324 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:44:28.323514 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:44:28.324654 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:44:28.324682 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:44:28.325919 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:44:28.328392 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:44:28.332649 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:44:28.336056 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:44:28.337376 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:44:28.339818 jq[1466]: false
Feb 13 19:44:28.340182 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:44:28.342752 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:44:28.347050 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:44:28.350675 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:44:28.356176 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:44:28.357878 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:44:28.358531 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:44:28.360437 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:44:28.365041 extend-filesystems[1467]: Found loop3
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found loop4
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found loop5
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found sr0
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found vda
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found vda1
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found vda2
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found vda3
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found usr
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found vda4
Feb 13 19:44:28.366116 extend-filesystems[1467]: Found vda6
Feb 13 19:44:28.395395 extend-filesystems[1467]: Found vda7
Feb 13 19:44:28.395395 extend-filesystems[1467]: Found vda9
Feb 13 19:44:28.395395 extend-filesystems[1467]: Checking size of /dev/vda9
Feb 13 19:44:28.374504 dbus-daemon[1465]: [system] SELinux support is enabled
Feb 13 19:44:28.367112 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:44:28.406690 extend-filesystems[1467]: Resized partition /dev/vda9
Feb 13 19:44:28.373684 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:44:28.408010 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:44:28.373904 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:44:28.410078 update_engine[1474]: I20250213 19:44:28.395030 1474 main.cc:92] Flatcar Update Engine starting
Feb 13 19:44:28.410078 update_engine[1474]: I20250213 19:44:28.396277 1474 update_check_scheduler.cc:74] Next update check in 2m36s
Feb 13 19:44:28.374381 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:44:28.410559 jq[1476]: true
Feb 13 19:44:28.374630 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:44:28.377546 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:44:28.382755 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:44:28.382797 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:44:28.384935 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:44:28.384958 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:44:28.388718 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:44:28.389018 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:44:28.409479 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:44:28.414713 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:44:28.420398 jq[1489]: true
Feb 13 19:44:28.430978 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1379)
Feb 13 19:44:28.428913 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:44:28.429125 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:44:28.431710 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:44:28.440429 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 19:44:28.442509 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:44:28.462973 systemd-logind[1473]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:44:28.463336 systemd-logind[1473]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:44:28.464831 extend-filesystems[1492]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 19:44:28.464831 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:44:28.464831 extend-filesystems[1492]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 19:44:28.481151 extend-filesystems[1467]: Resized filesystem in /dev/vda9
Feb 13 19:44:28.465729 systemd-logind[1473]: New seat seat0.
Feb 13 19:44:28.467478 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:44:28.473929 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:44:28.474160 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:44:28.488443 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:44:28.497646 bash[1517]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:44:28.499689 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:44:28.501827 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:44:28.643841 containerd[1493]: time="2025-02-13T19:44:28.643728118Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:44:28.668318 containerd[1493]: time="2025-02-13T19:44:28.668154408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:44:28.670078 containerd[1493]: time="2025-02-13T19:44:28.670039462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:44:28.670078 containerd[1493]: time="2025-02-13T19:44:28.670074428Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:44:28.670134 containerd[1493]: time="2025-02-13T19:44:28.670095507Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:44:28.670335 containerd[1493]: time="2025-02-13T19:44:28.670317413Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:44:28.670357 containerd[1493]: time="2025-02-13T19:44:28.670344384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:44:28.670454 containerd[1493]: time="2025-02-13T19:44:28.670428572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:44:28.670485 containerd[1493]: time="2025-02-13T19:44:28.670450873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:44:28.670734 containerd[1493]: time="2025-02-13T19:44:28.670706974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:44:28.670755 containerd[1493]: time="2025-02-13T19:44:28.670732722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:44:28.670773 containerd[1493]: time="2025-02-13T19:44:28.670758200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:44:28.670791 containerd[1493]: time="2025-02-13T19:44:28.670773078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:44:28.670925 containerd[1493]: time="2025-02-13T19:44:28.670900486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:44:28.671235 containerd[1493]: time="2025-02-13T19:44:28.671211820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:44:28.671409 containerd[1493]: time="2025-02-13T19:44:28.671383612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:44:28.671440 containerd[1493]: time="2025-02-13T19:44:28.671406686Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:44:28.671586 containerd[1493]: time="2025-02-13T19:44:28.671562047Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:44:28.671655 containerd[1493]: time="2025-02-13T19:44:28.671639692Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:44:28.677624 containerd[1493]: time="2025-02-13T19:44:28.677587094Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:44:28.677679 containerd[1493]: time="2025-02-13T19:44:28.677638500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:44:28.677679 containerd[1493]: time="2025-02-13T19:44:28.677658618Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:44:28.677679 containerd[1493]: time="2025-02-13T19:44:28.677674828Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:44:28.677763 containerd[1493]: time="2025-02-13T19:44:28.677690237Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:44:28.677878 containerd[1493]: time="2025-02-13T19:44:28.677852852Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:44:28.678126 containerd[1493]: time="2025-02-13T19:44:28.678104774Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:44:28.678247 containerd[1493]: time="2025-02-13T19:44:28.678227214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:44:28.678271 containerd[1493]: time="2025-02-13T19:44:28.678245859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:44:28.678271 containerd[1493]: time="2025-02-13T19:44:28.678260777Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:44:28.678324 containerd[1493]: time="2025-02-13T19:44:28.678274703Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:44:28.678324 containerd[1493]: time="2025-02-13T19:44:28.678309027Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:44:28.678324 containerd[1493]: time="2025-02-13T19:44:28.678321511Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:44:28.678384 containerd[1493]: time="2025-02-13T19:44:28.678334385Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:44:28.678384 containerd[1493]: time="2025-02-13T19:44:28.678346518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:44:28.678384 containerd[1493]: time="2025-02-13T19:44:28.678358580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:44:28.678384 containerd[1493]: time="2025-02-13T19:44:28.678369781Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:44:28.678384 containerd[1493]: time="2025-02-13T19:44:28.678379459Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"...
type=io.containerd.service.v1 Feb 13 19:44:28.678475 containerd[1493]: time="2025-02-13T19:44:28.678422310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678475 containerd[1493]: time="2025-02-13T19:44:28.678435094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678475 containerd[1493]: time="2025-02-13T19:44:28.678446535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678475 containerd[1493]: time="2025-02-13T19:44:28.678457526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678570 containerd[1493]: time="2025-02-13T19:44:28.678478946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678570 containerd[1493]: time="2025-02-13T19:44:28.678491319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678570 containerd[1493]: time="2025-02-13T19:44:28.678506768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678570 containerd[1493]: time="2025-02-13T19:44:28.678521416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678570 containerd[1493]: time="2025-02-13T19:44:28.678537756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678570 containerd[1493]: time="2025-02-13T19:44:28.678554918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678570 containerd[1493]: time="2025-02-13T19:44:28.678565649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678718 containerd[1493]: time="2025-02-13T19:44:28.678576779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678718 containerd[1493]: time="2025-02-13T19:44:28.678587610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678718 containerd[1493]: time="2025-02-13T19:44:28.678604281Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:44:28.678718 containerd[1493]: time="2025-02-13T19:44:28.678622285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678718 containerd[1493]: time="2025-02-13T19:44:28.678634307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678718 containerd[1493]: time="2025-02-13T19:44:28.678644526Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:44:28.678718 containerd[1493]: time="2025-02-13T19:44:28.678693378Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:44:28.678718 containerd[1493]: time="2025-02-13T19:44:28.678709669Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:44:28.678718 containerd[1493]: time="2025-02-13T19:44:28.678719818Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:44:28.678880 containerd[1493]: time="2025-02-13T19:44:28.678731920Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:44:28.678880 containerd[1493]: time="2025-02-13T19:44:28.678740767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.678880 containerd[1493]: time="2025-02-13T19:44:28.678757729Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:44:28.678880 containerd[1493]: time="2025-02-13T19:44:28.678767257Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:44:28.678880 containerd[1493]: time="2025-02-13T19:44:28.678776324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:44:28.679075 containerd[1493]: time="2025-02-13T19:44:28.679018808Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:44:28.679075 containerd[1493]: time="2025-02-13T19:44:28.679064033Z" level=info msg="Connect containerd service" Feb 13 19:44:28.679264 containerd[1493]: time="2025-02-13T19:44:28.679110620Z" level=info msg="using legacy CRI server" Feb 13 19:44:28.679264 containerd[1493]: time="2025-02-13T19:44:28.679120920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:44:28.679264 containerd[1493]: time="2025-02-13T19:44:28.679235665Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:44:28.679950 containerd[1493]: time="2025-02-13T19:44:28.679919617Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:44:28.680209 containerd[1493]: time="2025-02-13T19:44:28.680097992Z" level=info msg="Start subscribing containerd event" Feb 13 19:44:28.680209 containerd[1493]: time="2025-02-13T19:44:28.680149759Z" level=info msg="Start recovering state" Feb 13 19:44:28.680332 containerd[1493]: time="2025-02-13T19:44:28.680283620Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:44:28.680379 containerd[1493]: time="2025-02-13T19:44:28.680305801Z" level=info msg="Start event monitor" Feb 13 19:44:28.680518 containerd[1493]: time="2025-02-13T19:44:28.680418833Z" level=info msg="Start snapshots syncer" Feb 13 19:44:28.680518 containerd[1493]: time="2025-02-13T19:44:28.680388306Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:44:28.680518 containerd[1493]: time="2025-02-13T19:44:28.680433982Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:44:28.680518 containerd[1493]: time="2025-02-13T19:44:28.680454901Z" level=info msg="Start streaming server" Feb 13 19:44:28.680518 containerd[1493]: time="2025-02-13T19:44:28.680536053Z" level=info msg="containerd successfully booted in 0.039492s" Feb 13 19:44:28.680625 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:44:28.680952 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:44:28.705709 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:44:28.716516 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:44:28.718506 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:56680.service - OpenSSH per-connection server daemon (10.0.0.1:56680). Feb 13 19:44:28.725650 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:44:28.725871 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:44:28.728604 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:44:28.744406 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:44:28.755625 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:44:28.757881 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:44:28.759443 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 19:44:28.781636 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 56680 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:44:28.783470 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:44:28.792385 systemd-logind[1473]: New session 1 of user core. Feb 13 19:44:28.793650 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:44:28.806538 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:44:28.820333 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:44:28.831532 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:44:28.835488 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:44:28.959113 systemd[1555]: Queued start job for default target default.target. Feb 13 19:44:28.967649 systemd[1555]: Created slice app.slice - User Application Slice. Feb 13 19:44:28.967677 systemd[1555]: Reached target paths.target - Paths. Feb 13 19:44:28.967691 systemd[1555]: Reached target timers.target - Timers. Feb 13 19:44:28.969181 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:44:28.980699 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:44:28.980825 systemd[1555]: Reached target sockets.target - Sockets. Feb 13 19:44:28.980840 systemd[1555]: Reached target basic.target - Basic System. Feb 13 19:44:28.980877 systemd[1555]: Reached target default.target - Main User Target. Feb 13 19:44:28.980910 systemd[1555]: Startup finished in 138ms. Feb 13 19:44:28.981386 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:44:28.984235 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:44:29.047265 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:56682.service - OpenSSH per-connection server daemon (10.0.0.1:56682). Feb 13 19:44:29.089056 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 56682 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:44:29.090375 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:44:29.094472 systemd-logind[1473]: New session 2 of user core. Feb 13 19:44:29.106420 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:44:29.159885 sshd[1568]: Connection closed by 10.0.0.1 port 56682 Feb 13 19:44:29.160189 sshd-session[1566]: pam_unix(sshd:session): session closed for user core Feb 13 19:44:29.173633 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:56682.service: Deactivated successfully. Feb 13 19:44:29.175065 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:44:29.176203 systemd-logind[1473]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:44:29.177329 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:56698.service - OpenSSH per-connection server daemon (10.0.0.1:56698). Feb 13 19:44:29.179346 systemd-logind[1473]: Removed session 2. Feb 13 19:44:29.212665 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 56698 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:44:29.213917 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:44:29.217372 systemd-logind[1473]: New session 3 of user core. Feb 13 19:44:29.224381 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 19:44:29.279840 sshd[1575]: Connection closed by 10.0.0.1 port 56698 Feb 13 19:44:29.280228 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Feb 13 19:44:29.284784 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:56698.service: Deactivated successfully. Feb 13 19:44:29.286978 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:44:29.287888 systemd-logind[1473]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:44:29.289014 systemd-logind[1473]: Removed session 3. Feb 13 19:44:29.910499 systemd-networkd[1412]: eth0: Gained IPv6LL Feb 13 19:44:29.913925 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:44:29.915978 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:44:29.927485 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:44:29.930548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:44:29.933637 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:44:29.952442 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:44:29.952718 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:44:29.954578 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:44:29.957544 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:44:30.589363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:44:30.591085 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:44:30.603595 systemd[1]: Startup finished in 729ms (kernel) + 5.329s (initrd) + 4.609s (userspace) = 10.668s. Feb 13 19:44:30.609755 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:44:31.009581 kubelet[1601]: E0213 19:44:31.009522 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:44:31.013599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:44:31.013813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:44:39.252893 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:49458.service - OpenSSH per-connection server daemon (10.0.0.1:49458). Feb 13 19:44:39.291923 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 49458 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:44:39.293452 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:44:39.297780 systemd-logind[1473]: New session 4 of user core. Feb 13 19:44:39.307550 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:44:39.361340 sshd[1616]: Connection closed by 10.0.0.1 port 49458 Feb 13 19:44:39.361820 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Feb 13 19:44:39.379306 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:49458.service: Deactivated successfully. Feb 13 19:44:39.381160 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:44:39.382997 systemd-logind[1473]: Session 4 logged out. 
Waiting for processes to exit. Feb 13 19:44:39.403702 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:55662.service - OpenSSH per-connection server daemon (10.0.0.1:55662). Feb 13 19:44:39.404708 systemd-logind[1473]: Removed session 4. Feb 13 19:44:39.434517 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 55662 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:44:39.436319 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:44:39.440602 systemd-logind[1473]: New session 5 of user core. Feb 13 19:44:39.450455 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:44:39.501448 sshd[1623]: Connection closed by 10.0.0.1 port 55662 Feb 13 19:44:39.501925 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Feb 13 19:44:39.514798 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:55662.service: Deactivated successfully. Feb 13 19:44:39.516990 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:44:39.518979 systemd-logind[1473]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:44:39.529563 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:55676.service - OpenSSH per-connection server daemon (10.0.0.1:55676). Feb 13 19:44:39.530613 systemd-logind[1473]: Removed session 5. Feb 13 19:44:39.562421 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 55676 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:44:39.563959 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:44:39.569174 systemd-logind[1473]: New session 6 of user core. Feb 13 19:44:39.578593 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:44:39.633534 sshd[1630]: Connection closed by 10.0.0.1 port 55676 Feb 13 19:44:39.633986 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Feb 13 19:44:39.658483 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:55676.service: Deactivated successfully. Feb 13 19:44:39.660277 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:44:39.661860 systemd-logind[1473]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:44:39.663158 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:55682.service - OpenSSH per-connection server daemon (10.0.0.1:55682). Feb 13 19:44:39.663927 systemd-logind[1473]: Removed session 6. Feb 13 19:44:39.712267 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 55682 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:44:39.713781 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:44:39.718092 systemd-logind[1473]: New session 7 of user core. Feb 13 19:44:39.732542 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:44:39.790846 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:44:39.791195 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:44:39.807359 sudo[1638]: pam_unix(sudo:session): session closed for user root Feb 13 19:44:39.809224 sshd[1637]: Connection closed by 10.0.0.1 port 55682 Feb 13 19:44:39.809384 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Feb 13 19:44:39.820977 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:55682.service: Deactivated successfully. Feb 13 19:44:39.822570 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 13 19:44:39.823992 systemd-logind[1473]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:44:39.825690 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:55690.service - OpenSSH per-connection server daemon (10.0.0.1:55690). Feb 13 19:44:39.826549 systemd-logind[1473]: Removed session 7. Feb 13 19:44:39.863856 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 55690 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:44:39.865417 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:44:39.869322 systemd-logind[1473]: New session 8 of user core. Feb 13 19:44:39.879401 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:44:39.933273 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:44:39.933637 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:44:39.937226 sudo[1647]: pam_unix(sudo:session): session closed for user root Feb 13 19:44:39.943147 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:44:39.943498 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:44:39.960793 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:44:39.992153 augenrules[1669]: No rules Feb 13 19:44:39.993222 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:44:39.993481 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:44:39.994741 sudo[1646]: pam_unix(sudo:session): session closed for user root Feb 13 19:44:39.996497 sshd[1645]: Connection closed by 10.0.0.1 port 55690 Feb 13 19:44:39.996849 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Feb 13 19:44:40.014360 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:55690.service: Deactivated successfully. Feb 13 19:44:40.016072 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:44:40.017585 systemd-logind[1473]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:44:40.018841 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:55702.service - OpenSSH per-connection server daemon (10.0.0.1:55702). Feb 13 19:44:40.019640 systemd-logind[1473]: Removed session 8. Feb 13 19:44:40.055521 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 55702 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:44:40.057000 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:44:40.060944 systemd-logind[1473]: New session 9 of user core. Feb 13 19:44:40.070472 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:44:40.125115 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:44:40.125468 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:44:40.148628 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:44:40.166831 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:44:40.167092 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:44:40.724031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:44:40.733585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:44:40.759832 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit session-9.scope)... Feb 13 19:44:40.759847 systemd[1]: Reloading... Feb 13 19:44:40.848374 zram_generator::config[1758]: No configuration found. Feb 13 19:44:41.170966 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:44:41.248776 systemd[1]: Reloading finished in 488 ms. Feb 13 19:44:41.296725 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:44:41.296829 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:44:41.297115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:44:41.299570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:44:41.454919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:44:41.460918 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:44:41.525320 kubelet[1807]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:44:41.525320 kubelet[1807]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:44:41.525320 kubelet[1807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:44:41.525735 kubelet[1807]: I0213 19:44:41.525364 1807 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:44:42.033826 kubelet[1807]: I0213 19:44:42.032954 1807 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:44:42.033826 kubelet[1807]: I0213 19:44:42.032987 1807 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:44:42.033826 kubelet[1807]: I0213 19:44:42.033382 1807 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:44:42.053626 kubelet[1807]: I0213 19:44:42.053529 1807 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:44:42.060987 kubelet[1807]: E0213 19:44:42.060943 1807 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:44:42.060987 kubelet[1807]: I0213 19:44:42.060973 1807 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:44:42.066906 kubelet[1807]: I0213 19:44:42.066869 1807 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:44:42.067852 kubelet[1807]: I0213 19:44:42.067818 1807 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:44:42.068011 kubelet[1807]: I0213 19:44:42.067970 1807 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:44:42.068161 kubelet[1807]: I0213 19:44:42.067998 1807 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.131","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:44:42.068161 kubelet[1807]: I0213 19:44:42.068153 1807 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:44:42.068161 kubelet[1807]: I0213 19:44:42.068162 1807 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:44:42.068336 kubelet[1807]: I0213 19:44:42.068276 1807 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:44:42.069721 kubelet[1807]: I0213 19:44:42.069681 1807 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:44:42.069721 kubelet[1807]: I0213 19:44:42.069708 1807 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:44:42.069790 kubelet[1807]: I0213 19:44:42.069746 1807 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:44:42.069790 kubelet[1807]: I0213 19:44:42.069759 1807 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:44:42.069840 kubelet[1807]: E0213 19:44:42.069801 1807 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:44:42.069869 kubelet[1807]: E0213 19:44:42.069846 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:44:42.075983 kubelet[1807]: I0213 19:44:42.075953 1807 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:44:42.077649 kubelet[1807]: I0213 19:44:42.077608 1807 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:44:42.077688 kubelet[1807]: W0213 19:44:42.077680 1807 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:44:42.078392 kubelet[1807]: I0213 19:44:42.078365 1807 server.go:1269] "Started kubelet" Feb 13 19:44:42.079680 kubelet[1807]: I0213 19:44:42.079617 1807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:44:42.082169 kubelet[1807]: I0213 19:44:42.082091 1807 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:44:42.082784 kubelet[1807]: I0213 19:44:42.082423 1807 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:44:42.082784 kubelet[1807]: I0213 19:44:42.082673 1807 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:44:42.084178 kubelet[1807]: I0213 19:44:42.084094 1807 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:44:42.084418 kubelet[1807]: I0213 19:44:42.084373 1807 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:44:42.085356 kubelet[1807]: W0213 19:44:42.084942 1807 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:44:42.085356 kubelet[1807]: E0213 19:44:42.084976 1807 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 19:44:42.085356 kubelet[1807]: E0213 19:44:42.085198 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:42.085356 kubelet[1807]: I0213 19:44:42.085232 1807 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:44:42.085875 kubelet[1807]: I0213 19:44:42.085846 1807 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:44:42.087294 kubelet[1807]: I0213 19:44:42.087112 1807 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:44:42.087621 kubelet[1807]: I0213 19:44:42.087539 1807 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:44:42.088729 kubelet[1807]: W0213 19:44:42.088577 1807 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.131" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:44:42.088729 kubelet[1807]: E0213 19:44:42.088617 1807 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.131\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:44:42.088729 kubelet[1807]: W0213 19:44:42.088644 1807 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:44:42.088729 kubelet[1807]: E0213 19:44:42.088672 1807 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:44:42.090762 kubelet[1807]: I0213 19:44:42.089043 1807 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:44:42.090762 kubelet[1807]: I0213 19:44:42.089059 1807 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:44:42.095303 kubelet[1807]: E0213 19:44:42.092674 1807 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:44:42.096929 kubelet[1807]: E0213 19:44:42.096876 1807 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.131\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:44:42.100458 kubelet[1807]: E0213 19:44:42.098492 1807 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.1823dc193ea39dac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-02-13 19:44:42.078346668 +0000 UTC m=+0.594227416,LastTimestamp:2025-02-13 19:44:42.078346668 +0000 UTC m=+0.594227416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" Feb 13 19:44:42.102826 kubelet[1807]: I0213 19:44:42.102784 1807 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:44:42.102893 kubelet[1807]: I0213 19:44:42.102841 1807 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:44:42.102893 kubelet[1807]: I0213 19:44:42.102866 1807 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:44:42.106941 kubelet[1807]: E0213 19:44:42.106636 1807 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.1823dc193f7dfac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-02-13 19:44:42.092657351 +0000 UTC m=+0.608538098,LastTimestamp:2025-02-13 19:44:42.092657351 +0000 UTC m=+0.608538098,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" Feb 13 
19:44:42.110082 kubelet[1807]: E0213 19:44:42.110003 1807 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.1823dc19400c16de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.131 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-02-13 19:44:42.101970654 +0000 UTC m=+0.617851402,LastTimestamp:2025-02-13 19:44:42.101970654 +0000 UTC m=+0.617851402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" Feb 13 19:44:42.113430 kubelet[1807]: E0213 19:44:42.113181 1807 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.1823dc19400c3dc3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.131 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-02-13 19:44:42.101980611 +0000 UTC m=+0.617861359,LastTimestamp:2025-02-13 19:44:42.101980611 +0000 UTC m=+0.617861359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" Feb 13 19:44:42.116128 kubelet[1807]: E0213 19:44:42.116078 1807 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.1823dc19400c4960 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.131 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-02-13 19:44:42.101983584 +0000 UTC m=+0.617864332,LastTimestamp:2025-02-13 19:44:42.101983584 +0000 UTC m=+0.617864332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" Feb 13 19:44:42.150607 kubelet[1807]: I0213 19:44:42.150533 1807 policy_none.go:49] "None policy: Start" Feb 13 19:44:42.151816 kubelet[1807]: I0213 19:44:42.151767 1807 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:44:42.151816 kubelet[1807]: I0213 19:44:42.151814 1807 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:44:42.174936 kubelet[1807]: I0213 19:44:42.174892 1807 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:44:42.176793 kubelet[1807]: I0213 19:44:42.176767 1807 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:44:42.176850 kubelet[1807]: I0213 19:44:42.176810 1807 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:44:42.176850 kubelet[1807]: I0213 19:44:42.176838 1807 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:44:42.176999 kubelet[1807]: E0213 19:44:42.176974 1807 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:44:42.181630 kubelet[1807]: W0213 19:44:42.181598 1807 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 13 19:44:42.181676 kubelet[1807]: E0213 19:44:42.181639 1807 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 19:44:42.185469 kubelet[1807]: E0213 19:44:42.185438 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:42.187888 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:44:42.200590 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:44:42.203919 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:44:42.211164 kubelet[1807]: I0213 19:44:42.211134 1807 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:44:42.211404 kubelet[1807]: I0213 19:44:42.211387 1807 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:44:42.211442 kubelet[1807]: I0213 19:44:42.211404 1807 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:44:42.212095 kubelet[1807]: I0213 19:44:42.212075 1807 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:44:42.213437 kubelet[1807]: E0213 19:44:42.213399 1807 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.131\" not found" Feb 13 19:44:42.217750 kubelet[1807]: E0213 19:44:42.217625 1807 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.1823dc1946b4ef5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-02-13 19:44:42.213699423 +0000 UTC m=+0.729580171,LastTimestamp:2025-02-13 19:44:42.213699423 +0000 UTC m=+0.729580171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" Feb 13 19:44:42.301358 kubelet[1807]: E0213 19:44:42.301250 1807 controller.go:145] 
"Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.131\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 13 19:44:42.312882 kubelet[1807]: I0213 19:44:42.312851 1807 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.131" Feb 13 19:44:42.316558 kubelet[1807]: E0213 19:44:42.316530 1807 kubelet_node_status.go:95] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.131" Feb 13 19:44:42.316598 kubelet[1807]: E0213 19:44:42.316485 1807 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.131.1823dc19400c16de\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.1823dc19400c16de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.131 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-02-13 19:44:42.101970654 +0000 UTC m=+0.617851402,LastTimestamp:2025-02-13 19:44:42.31280962 +0000 UTC m=+0.828690368,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" Feb 13 19:44:42.317562 kubelet[1807]: E0213 19:44:42.317481 1807 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.131.1823dc19400c3dc3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.1823dc19400c3dc3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.131 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-02-13 19:44:42.101980611 +0000 UTC m=+0.617861359,LastTimestamp:2025-02-13 19:44:42.312821879 +0000 UTC m=+0.828702627,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" Feb 13 19:44:42.320297 kubelet[1807]: E0213 19:44:42.320201 1807 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.131.1823dc19400c4960\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.1823dc19400c4960 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.131 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-02-13 19:44:42.101983584 +0000 UTC m=+0.617864332,LastTimestamp:2025-02-13 19:44:42.312824931 +0000 UTC m=+0.828705679,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" Feb 13 19:44:42.517524 kubelet[1807]: I0213 19:44:42.517469 1807 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.131" Feb 13 19:44:42.527054 kubelet[1807]: I0213 19:44:42.527009 1807 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.131" Feb 13 19:44:42.527054 kubelet[1807]: E0213 19:44:42.527042 1807 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.131\": node \"10.0.0.131\" not found" Feb 13 19:44:42.546457 kubelet[1807]: E0213 19:44:42.546404 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:42.646926 kubelet[1807]: E0213 19:44:42.646743 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:42.747176 kubelet[1807]: E0213 19:44:42.747121 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:42.847865 kubelet[1807]: E0213 19:44:42.847791 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:42.948512 kubelet[1807]: E0213 19:44:42.948353 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:43.037027 kubelet[1807]: I0213 19:44:43.036958 1807 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:44:43.048809 kubelet[1807]: E0213 19:44:43.048782 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:43.070187 kubelet[1807]: E0213 19:44:43.070146 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:44:43.148908 kubelet[1807]: E0213 19:44:43.148859 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:43.249152 kubelet[1807]: E0213 19:44:43.249113 1807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" Feb 13 19:44:43.350182 kubelet[1807]: I0213 19:44:43.350135 1807 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:44:43.350617 containerd[1493]: time="2025-02-13T19:44:43.350578794Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:44:43.350940 kubelet[1807]: I0213 19:44:43.350810 1807 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:44:43.755549 sudo[1680]: pam_unix(sudo:session): session closed for user root Feb 13 19:44:43.756954 sshd[1679]: Connection closed by 10.0.0.1 port 55702 Feb 13 19:44:43.757397 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Feb 13 19:44:43.762419 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:55702.service: Deactivated successfully. Feb 13 19:44:43.764413 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:44:43.765094 systemd-logind[1473]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:44:43.765943 systemd-logind[1473]: Removed session 9. 
Feb 13 19:44:44.071215 kubelet[1807]: E0213 19:44:44.071076 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:44.071215 kubelet[1807]: I0213 19:44:44.071098 1807 apiserver.go:52] "Watching apiserver"
Feb 13 19:44:44.074089 kubelet[1807]: E0213 19:44:44.074046 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:44:44.082403 systemd[1]: Created slice kubepods-besteffort-podba8c561e_83bb_4554_a636_2aae88d6b27f.slice - libcontainer container kubepods-besteffort-podba8c561e_83bb_4554_a636_2aae88d6b27f.slice.
Feb 13 19:44:44.083031 kubelet[1807]: I0213 19:44:44.082869 1807 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 19:44:44.092143 systemd[1]: Created slice kubepods-besteffort-pod9633839f_8429_4c41_b611_cdbf1f1cc11e.slice - libcontainer container kubepods-besteffort-pod9633839f_8429_4c41_b611_cdbf1f1cc11e.slice.
Feb 13 19:44:44.094933 kubelet[1807]: I0213 19:44:44.094875 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f69143c-9b46-49ed-a443-f2500935a881-socket-dir\") pod \"csi-node-driver-qptz9\" (UID: \"7f69143c-9b46-49ed-a443-f2500935a881\") " pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:44:44.094933 kubelet[1807]: I0213 19:44:44.094926 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f69143c-9b46-49ed-a443-f2500935a881-registration-dir\") pod \"csi-node-driver-qptz9\" (UID: \"7f69143c-9b46-49ed-a443-f2500935a881\") " pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:44:44.095027 kubelet[1807]: I0213 19:44:44.094957 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba8c561e-83bb-4554-a636-2aae88d6b27f-kube-proxy\") pod \"kube-proxy-shvq5\" (UID: \"ba8c561e-83bb-4554-a636-2aae88d6b27f\") " pod="kube-system/kube-proxy-shvq5"
Feb 13 19:44:44.095067 kubelet[1807]: I0213 19:44:44.095020 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba8c561e-83bb-4554-a636-2aae88d6b27f-xtables-lock\") pod \"kube-proxy-shvq5\" (UID: \"ba8c561e-83bb-4554-a636-2aae88d6b27f\") " pod="kube-system/kube-proxy-shvq5"
Feb 13 19:44:44.095067 kubelet[1807]: I0213 19:44:44.095054 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9633839f-8429-4c41-b611-cdbf1f1cc11e-tigera-ca-bundle\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095123 kubelet[1807]: I0213 19:44:44.095070 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9633839f-8429-4c41-b611-cdbf1f1cc11e-var-run-calico\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095123 kubelet[1807]: I0213 19:44:44.095088 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9633839f-8429-4c41-b611-cdbf1f1cc11e-lib-modules\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095123 kubelet[1807]: I0213 19:44:44.095103 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9633839f-8429-4c41-b611-cdbf1f1cc11e-cni-net-dir\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095123 kubelet[1807]: I0213 19:44:44.095118 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9633839f-8429-4c41-b611-cdbf1f1cc11e-flexvol-driver-host\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095239 kubelet[1807]: I0213 19:44:44.095136 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f69143c-9b46-49ed-a443-f2500935a881-kubelet-dir\") pod \"csi-node-driver-qptz9\" (UID: \"7f69143c-9b46-49ed-a443-f2500935a881\") " pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:44:44.095239 kubelet[1807]: I0213 19:44:44.095152 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdzh8\" (UniqueName: \"kubernetes.io/projected/9633839f-8429-4c41-b611-cdbf1f1cc11e-kube-api-access-fdzh8\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095239 kubelet[1807]: I0213 19:44:44.095167 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl2s5\" (UniqueName: \"kubernetes.io/projected/7f69143c-9b46-49ed-a443-f2500935a881-kube-api-access-rl2s5\") pod \"csi-node-driver-qptz9\" (UID: \"7f69143c-9b46-49ed-a443-f2500935a881\") " pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:44:44.095239 kubelet[1807]: I0213 19:44:44.095182 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9633839f-8429-4c41-b611-cdbf1f1cc11e-policysync\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095239 kubelet[1807]: I0213 19:44:44.095208 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9633839f-8429-4c41-b611-cdbf1f1cc11e-node-certs\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095414 kubelet[1807]: I0213 19:44:44.095236 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9633839f-8429-4c41-b611-cdbf1f1cc11e-var-lib-calico\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095414 kubelet[1807]: I0213 19:44:44.095259 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9633839f-8429-4c41-b611-cdbf1f1cc11e-cni-log-dir\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095414 kubelet[1807]: I0213 19:44:44.095279 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7f69143c-9b46-49ed-a443-f2500935a881-varrun\") pod \"csi-node-driver-qptz9\" (UID: \"7f69143c-9b46-49ed-a443-f2500935a881\") " pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:44:44.095414 kubelet[1807]: I0213 19:44:44.095334 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba8c561e-83bb-4554-a636-2aae88d6b27f-lib-modules\") pod \"kube-proxy-shvq5\" (UID: \"ba8c561e-83bb-4554-a636-2aae88d6b27f\") " pod="kube-system/kube-proxy-shvq5"
Feb 13 19:44:44.095414 kubelet[1807]: I0213 19:44:44.095354 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rcwq\" (UniqueName: \"kubernetes.io/projected/ba8c561e-83bb-4554-a636-2aae88d6b27f-kube-api-access-8rcwq\") pod \"kube-proxy-shvq5\" (UID: \"ba8c561e-83bb-4554-a636-2aae88d6b27f\") " pod="kube-system/kube-proxy-shvq5"
Feb 13 19:44:44.095613 kubelet[1807]: I0213 19:44:44.095380 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9633839f-8429-4c41-b611-cdbf1f1cc11e-xtables-lock\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.095613 kubelet[1807]: I0213 19:44:44.095395 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9633839f-8429-4c41-b611-cdbf1f1cc11e-cni-bin-dir\") pod \"calico-node-hmpt5\" (UID: \"9633839f-8429-4c41-b611-cdbf1f1cc11e\") " pod="calico-system/calico-node-hmpt5"
Feb 13 19:44:44.200056 kubelet[1807]: E0213 19:44:44.200024 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.200056 kubelet[1807]: W0213 19:44:44.200044 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.200186 kubelet[1807]: E0213 19:44:44.200066 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.297891 kubelet[1807]: E0213 19:44:44.297855 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.297891 kubelet[1807]: W0213 19:44:44.297874 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.297891 kubelet[1807]: E0213 19:44:44.297893 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.298151 kubelet[1807]: E0213 19:44:44.298127 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.298151 kubelet[1807]: W0213 19:44:44.298139 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.298151 kubelet[1807]: E0213 19:44:44.298146 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.298398 kubelet[1807]: E0213 19:44:44.298374 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.298398 kubelet[1807]: W0213 19:44:44.298386 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.298398 kubelet[1807]: E0213 19:44:44.298395 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.398974 kubelet[1807]: E0213 19:44:44.398853 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.398974 kubelet[1807]: W0213 19:44:44.398876 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.398974 kubelet[1807]: E0213 19:44:44.398896 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.399316 kubelet[1807]: E0213 19:44:44.399084 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.399316 kubelet[1807]: W0213 19:44:44.399100 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.399316 kubelet[1807]: E0213 19:44:44.399108 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.399316 kubelet[1807]: E0213 19:44:44.399276 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.399316 kubelet[1807]: W0213 19:44:44.399303 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.399316 kubelet[1807]: E0213 19:44:44.399312 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.403022 kubelet[1807]: E0213 19:44:44.402949 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.403022 kubelet[1807]: W0213 19:44:44.402963 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.403022 kubelet[1807]: E0213 19:44:44.402975 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.403697 kubelet[1807]: E0213 19:44:44.403681 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.403697 kubelet[1807]: W0213 19:44:44.403692 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.403782 kubelet[1807]: E0213 19:44:44.403703 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.479573 kubelet[1807]: E0213 19:44:44.479530 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:44:44.479573 kubelet[1807]: W0213 19:44:44.479556 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:44:44.479573 kubelet[1807]: E0213 19:44:44.479579 1807 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:44:44.690303 kubelet[1807]: E0213 19:44:44.690153 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:44.690810 containerd[1493]: time="2025-02-13T19:44:44.690737751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-shvq5,Uid:ba8c561e-83bb-4554-a636-2aae88d6b27f,Namespace:kube-system,Attempt:0,}"
Feb 13 19:44:44.694886 kubelet[1807]: E0213 19:44:44.694866 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:44.695270 containerd[1493]: time="2025-02-13T19:44:44.695236314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hmpt5,Uid:9633839f-8429-4c41-b611-cdbf1f1cc11e,Namespace:calico-system,Attempt:0,}"
Feb 13 19:44:45.072122 kubelet[1807]: E0213 19:44:45.072048 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:45.648635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948182080.mount: Deactivated successfully.
Feb 13 19:44:45.660622 containerd[1493]: time="2025-02-13T19:44:45.660533876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:44:45.664712 containerd[1493]: time="2025-02-13T19:44:45.664658139Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 19:44:45.665849 containerd[1493]: time="2025-02-13T19:44:45.665810992Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:44:45.666846 containerd[1493]: time="2025-02-13T19:44:45.666797214Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:44:45.667905 containerd[1493]: time="2025-02-13T19:44:45.667859509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:44:45.669906 containerd[1493]: time="2025-02-13T19:44:45.669868433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:44:45.670790 containerd[1493]: time="2025-02-13T19:44:45.670733784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 979.875024ms"
Feb 13 19:44:45.673117 containerd[1493]: time="2025-02-13T19:44:45.673084948Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 977.720761ms"
Feb 13 19:44:45.761122 containerd[1493]: time="2025-02-13T19:44:45.760962161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:44:45.761122 containerd[1493]: time="2025-02-13T19:44:45.761029294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:44:45.761122 containerd[1493]: time="2025-02-13T19:44:45.761045412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:44:45.761655 containerd[1493]: time="2025-02-13T19:44:45.761356850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:44:45.762123 containerd[1493]: time="2025-02-13T19:44:45.762010011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:44:45.762207 containerd[1493]: time="2025-02-13T19:44:45.762054999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:44:45.762207 containerd[1493]: time="2025-02-13T19:44:45.762179084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:44:45.762792 containerd[1493]: time="2025-02-13T19:44:45.762406831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:44:45.839443 systemd[1]: Started cri-containerd-df8c668a9f30253e964b4021ce469eedf86dfe9eb5fb63d0a65679ded9c9c6d4.scope - libcontainer container df8c668a9f30253e964b4021ce469eedf86dfe9eb5fb63d0a65679ded9c9c6d4.
Feb 13 19:44:45.844266 systemd[1]: Started cri-containerd-7ea8ce13ac72fc4302210b202bed690809ac6dd301dbdafa1aa2e5876420f30a.scope - libcontainer container 7ea8ce13ac72fc4302210b202bed690809ac6dd301dbdafa1aa2e5876420f30a.
Feb 13 19:44:45.864142 containerd[1493]: time="2025-02-13T19:44:45.864095587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hmpt5,Uid:9633839f-8429-4c41-b611-cdbf1f1cc11e,Namespace:calico-system,Attempt:0,} returns sandbox id \"df8c668a9f30253e964b4021ce469eedf86dfe9eb5fb63d0a65679ded9c9c6d4\""
Feb 13 19:44:45.866143 kubelet[1807]: E0213 19:44:45.865833 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:45.867584 containerd[1493]: time="2025-02-13T19:44:45.867528539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 19:44:45.871123 containerd[1493]: time="2025-02-13T19:44:45.871088268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-shvq5,Uid:ba8c561e-83bb-4554-a636-2aae88d6b27f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ea8ce13ac72fc4302210b202bed690809ac6dd301dbdafa1aa2e5876420f30a\""
Feb 13 19:44:45.871813 kubelet[1807]: E0213 19:44:45.871791 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:46.072815 kubelet[1807]: E0213 19:44:46.072752 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:46.177907 kubelet[1807]: E0213 19:44:46.177840 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:44:47.073905 kubelet[1807]: E0213 19:44:47.073852 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:48.074613 kubelet[1807]: E0213 19:44:48.074530 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:48.177217 kubelet[1807]: E0213 19:44:48.177151 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:44:48.316061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800431155.mount: Deactivated successfully.
Feb 13 19:44:48.382639 containerd[1493]: time="2025-02-13T19:44:48.382493076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:44:48.383447 containerd[1493]: time="2025-02-13T19:44:48.383412044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 19:44:48.384682 containerd[1493]: time="2025-02-13T19:44:48.384631988Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:44:48.386627 containerd[1493]: time="2025-02-13T19:44:48.386596214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:44:48.387208 containerd[1493]: time="2025-02-13T19:44:48.387163748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.519596916s"
Feb 13 19:44:48.387234 containerd[1493]: time="2025-02-13T19:44:48.387206997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 19:44:48.388354 containerd[1493]: time="2025-02-13T19:44:48.388318695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 19:44:48.389579 containerd[1493]: time="2025-02-13T19:44:48.389551816Z" level=info msg="CreateContainer within sandbox \"df8c668a9f30253e964b4021ce469eedf86dfe9eb5fb63d0a65679ded9c9c6d4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 19:44:48.504738 containerd[1493]: time="2025-02-13T19:44:48.504666493Z" level=info msg="CreateContainer within sandbox \"df8c668a9f30253e964b4021ce469eedf86dfe9eb5fb63d0a65679ded9c9c6d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"433585fe15cb8eb3d9dc3235de572e796fc15ef306f1109e4edc2d6cc4800177\""
Feb 13 19:44:48.505570 containerd[1493]: time="2025-02-13T19:44:48.505528334Z" level=info msg="StartContainer for \"433585fe15cb8eb3d9dc3235de572e796fc15ef306f1109e4edc2d6cc4800177\""
Feb 13 19:44:48.536541 systemd[1]: Started cri-containerd-433585fe15cb8eb3d9dc3235de572e796fc15ef306f1109e4edc2d6cc4800177.scope - libcontainer container 433585fe15cb8eb3d9dc3235de572e796fc15ef306f1109e4edc2d6cc4800177.
Feb 13 19:44:48.568483 containerd[1493]: time="2025-02-13T19:44:48.568439901Z" level=info msg="StartContainer for \"433585fe15cb8eb3d9dc3235de572e796fc15ef306f1109e4edc2d6cc4800177\" returns successfully"
Feb 13 19:44:48.580336 systemd[1]: cri-containerd-433585fe15cb8eb3d9dc3235de572e796fc15ef306f1109e4edc2d6cc4800177.scope: Deactivated successfully.
Feb 13 19:44:48.682629 containerd[1493]: time="2025-02-13T19:44:48.682490264Z" level=info msg="shim disconnected" id=433585fe15cb8eb3d9dc3235de572e796fc15ef306f1109e4edc2d6cc4800177 namespace=k8s.io
Feb 13 19:44:48.682629 containerd[1493]: time="2025-02-13T19:44:48.682550656Z" level=warning msg="cleaning up after shim disconnected" id=433585fe15cb8eb3d9dc3235de572e796fc15ef306f1109e4edc2d6cc4800177 namespace=k8s.io
Feb 13 19:44:48.682629 containerd[1493]: time="2025-02-13T19:44:48.682562982Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:44:49.075496 kubelet[1807]: E0213 19:44:49.075457 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:49.190033 kubelet[1807]: E0213 19:44:49.189992 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:49.297320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-433585fe15cb8eb3d9dc3235de572e796fc15ef306f1109e4edc2d6cc4800177-rootfs.mount: Deactivated successfully.
Feb 13 19:44:49.431439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270232535.mount: Deactivated successfully.
Feb 13 19:44:50.075655 kubelet[1807]: E0213 19:44:50.075604 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:50.111000 containerd[1493]: time="2025-02-13T19:44:50.110935389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:44:50.112194 containerd[1493]: time="2025-02-13T19:44:50.112159770Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108"
Feb 13 19:44:50.113640 containerd[1493]: time="2025-02-13T19:44:50.113572949Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:44:50.115860 containerd[1493]: time="2025-02-13T19:44:50.115809989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:44:50.117482 containerd[1493]: time="2025-02-13T19:44:50.117439967Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.729085743s"
Feb 13 19:44:50.117482 containerd[1493]: time="2025-02-13T19:44:50.117477062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 19:44:50.118560 containerd[1493]: time="2025-02-13T19:44:50.118503802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 19:44:50.119566 containerd[1493]: time="2025-02-13T19:44:50.119535679Z" level=info msg="CreateContainer within sandbox \"7ea8ce13ac72fc4302210b202bed690809ac6dd301dbdafa1aa2e5876420f30a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:44:50.137035 containerd[1493]: time="2025-02-13T19:44:50.136970756Z" level=info msg="CreateContainer within sandbox \"7ea8ce13ac72fc4302210b202bed690809ac6dd301dbdafa1aa2e5876420f30a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e707361c9ad0a2c9ce4511ddc9eece96a56d2fc88266e76847babc7d7b1ecaa8\""
Feb 13 19:44:50.137569 containerd[1493]: time="2025-02-13T19:44:50.137532052Z" level=info msg="StartContainer for \"e707361c9ad0a2c9ce4511ddc9eece96a56d2fc88266e76847babc7d7b1ecaa8\""
Feb 13 19:44:50.167464 systemd[1]: Started cri-containerd-e707361c9ad0a2c9ce4511ddc9eece96a56d2fc88266e76847babc7d7b1ecaa8.scope - libcontainer container e707361c9ad0a2c9ce4511ddc9eece96a56d2fc88266e76847babc7d7b1ecaa8.
Feb 13 19:44:50.179353 kubelet[1807]: E0213 19:44:50.179308 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:44:50.200204 containerd[1493]: time="2025-02-13T19:44:50.200160868Z" level=info msg="StartContainer for \"e707361c9ad0a2c9ce4511ddc9eece96a56d2fc88266e76847babc7d7b1ecaa8\" returns successfully"
Feb 13 19:44:51.076515 kubelet[1807]: E0213 19:44:51.076413 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:51.195754 kubelet[1807]: E0213 19:44:51.195701 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:51.277675 kubelet[1807]: I0213 19:44:51.277600 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-shvq5" podStartSLOduration=5.031464539 podStartE2EDuration="9.277585077s" podCreationTimestamp="2025-02-13 19:44:42 +0000 UTC" firstStartedPulling="2025-02-13 19:44:45.872182398 +0000 UTC m=+4.388063146" lastFinishedPulling="2025-02-13 19:44:50.118302936 +0000 UTC m=+8.634183684" observedRunningTime="2025-02-13 19:44:51.27745597 +0000 UTC m=+9.793336718" watchObservedRunningTime="2025-02-13 19:44:51.277585077 +0000 UTC m=+9.793465825"
Feb 13 19:44:52.076857 kubelet[1807]: E0213 19:44:52.076788 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:52.178390 kubelet[1807]: E0213 19:44:52.178314 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:44:52.197206 kubelet[1807]: E0213 19:44:52.197162 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:53.077106 kubelet[1807]: E0213 19:44:53.077039 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:54.077410 kubelet[1807]: E0213 19:44:54.077350 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:54.177958 kubelet[1807]: E0213 19:44:54.177339 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:44:55.077673 kubelet[1807]: E0213 19:44:55.077610 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:55.339307 containerd[1493]: time="2025-02-13T19:44:55.339149196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:44:55.340200 containerd[1493]: time="2025-02-13T19:44:55.340158408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 19:44:55.341568 containerd[1493]: time="2025-02-13T19:44:55.341525919Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:44:55.343780 containerd[1493]: time="2025-02-13T19:44:55.343725458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:44:55.346305 containerd[1493]: time="2025-02-13T19:44:55.345308999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.226758186s"
Feb 13 19:44:55.346305 containerd[1493]: time="2025-02-13T19:44:55.345352772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 19:44:55.348523 containerd[1493]: time="2025-02-13T19:44:55.348446271Z" level=info msg="CreateContainer within sandbox \"df8c668a9f30253e964b4021ce469eedf86dfe9eb5fb63d0a65679ded9c9c6d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:44:55.374324 containerd[1493]: time="2025-02-13T19:44:55.374264962Z" level=info msg="CreateContainer within sandbox \"df8c668a9f30253e964b4021ce469eedf86dfe9eb5fb63d0a65679ded9c9c6d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"73cc6a33126a66052314f1eea76bb44062a6b6be2880a15bacf3c810b86cd10f\""
Feb 13 19:44:55.374901 containerd[1493]: time="2025-02-13T19:44:55.374708412Z" level=info msg="StartContainer for \"73cc6a33126a66052314f1eea76bb44062a6b6be2880a15bacf3c810b86cd10f\""
Feb 13 19:44:55.417563 systemd[1]: Started cri-containerd-73cc6a33126a66052314f1eea76bb44062a6b6be2880a15bacf3c810b86cd10f.scope - libcontainer container 73cc6a33126a66052314f1eea76bb44062a6b6be2880a15bacf3c810b86cd10f.
Feb 13 19:44:55.468374 containerd[1493]: time="2025-02-13T19:44:55.468319648Z" level=info msg="StartContainer for \"73cc6a33126a66052314f1eea76bb44062a6b6be2880a15bacf3c810b86cd10f\" returns successfully"
Feb 13 19:44:56.077921 kubelet[1807]: E0213 19:44:56.077863 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:56.179421 kubelet[1807]: E0213 19:44:56.178198 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:44:56.245608 kubelet[1807]: E0213 19:44:56.245555 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:57.078684 kubelet[1807]: E0213 19:44:57.078629 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:57.197063 systemd[1]: cri-containerd-73cc6a33126a66052314f1eea76bb44062a6b6be2880a15bacf3c810b86cd10f.scope: Deactivated successfully.
Feb 13 19:44:57.208416 kubelet[1807]: I0213 19:44:57.208386 1807 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 19:44:57.226540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73cc6a33126a66052314f1eea76bb44062a6b6be2880a15bacf3c810b86cd10f-rootfs.mount: Deactivated successfully.
Feb 13 19:44:57.247877 kubelet[1807]: E0213 19:44:57.247829 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:58.078930 kubelet[1807]: E0213 19:44:58.078858 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:58.184170 systemd[1]: Created slice kubepods-besteffort-pod7f69143c_9b46_49ed_a443_f2500935a881.slice - libcontainer container kubepods-besteffort-pod7f69143c_9b46_49ed_a443_f2500935a881.slice.
Feb 13 19:44:58.186468 containerd[1493]: time="2025-02-13T19:44:58.186413223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:0,}"
Feb 13 19:44:58.420469 containerd[1493]: time="2025-02-13T19:44:58.420257086Z" level=info msg="shim disconnected" id=73cc6a33126a66052314f1eea76bb44062a6b6be2880a15bacf3c810b86cd10f namespace=k8s.io
Feb 13 19:44:58.420469 containerd[1493]: time="2025-02-13T19:44:58.420361527Z" level=warning msg="cleaning up after shim disconnected" id=73cc6a33126a66052314f1eea76bb44062a6b6be2880a15bacf3c810b86cd10f namespace=k8s.io
Feb 13 19:44:58.420469 containerd[1493]: time="2025-02-13T19:44:58.420375190Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:44:58.714907 containerd[1493]: time="2025-02-13T19:44:58.714849290Z" level=error msg="Failed to destroy network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:44:58.715247 containerd[1493]: time="2025-02-13T19:44:58.715217229Z" level=error msg="encountered an error cleaning up failed sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:44:58.715304 containerd[1493]: time="2025-02-13T19:44:58.715275510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:44:58.715570 kubelet[1807]: E0213 19:44:58.715504 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:44:58.715778 kubelet[1807]: E0213 19:44:58.715588 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:44:58.715778 kubelet[1807]: E0213 19:44:58.715608 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:44:58.715778 kubelet[1807]: E0213 19:44:58.715661 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:44:58.716589 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa-shm.mount: Deactivated successfully.
Feb 13 19:44:59.079250 kubelet[1807]: E0213 19:44:59.079051 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:44:59.251485 kubelet[1807]: I0213 19:44:59.251441 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa"
Feb 13 19:44:59.252025 containerd[1493]: time="2025-02-13T19:44:59.251991266Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\""
Feb 13 19:44:59.252466 containerd[1493]: time="2025-02-13T19:44:59.252180047Z" level=info msg="Ensure that sandbox 2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa in task-service has been cleanup successfully"
Feb 13 19:44:59.252466 containerd[1493]: time="2025-02-13T19:44:59.252442817Z" level=info msg="TearDown network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" successfully"
Feb 13 19:44:59.252466 containerd[1493]: time="2025-02-13T19:44:59.252456501Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" returns successfully"
Feb 13 19:44:59.254121 kubelet[1807]: E0213 19:44:59.253981 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:44:59.254184 containerd[1493]: time="2025-02-13T19:44:59.254146414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:1,}"
Feb 13 19:44:59.254436 systemd[1]: run-netns-cni\x2d7dbce197\x2de0a8\x2d1c4a\x2d7a00\x2d9a978e889d53.mount: Deactivated successfully.
Feb 13 19:44:59.254974 containerd[1493]: time="2025-02-13T19:44:59.254828540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 19:44:59.328076 containerd[1493]: time="2025-02-13T19:44:59.328020447Z" level=error msg="Failed to destroy network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:44:59.328475 containerd[1493]: time="2025-02-13T19:44:59.328438219Z" level=error msg="encountered an error cleaning up failed sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:44:59.328524 containerd[1493]: time="2025-02-13T19:44:59.328496862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:44:59.328797 kubelet[1807]: E0213 19:44:59.328750 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:44:59.328854 kubelet[1807]: E0213 19:44:59.328822 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:44:59.328854 kubelet[1807]: E0213 19:44:59.328844 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:44:59.328919 kubelet[1807]: E0213 19:44:59.328890 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:44:59.568216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060-shm.mount: Deactivated successfully.
Feb 13 19:45:00.044021 systemd[1]: Created slice kubepods-besteffort-pod04b14b13_baaa_41e3_952c_da558cf5b655.slice - libcontainer container kubepods-besteffort-pod04b14b13_baaa_41e3_952c_da558cf5b655.slice.
Feb 13 19:45:00.079600 kubelet[1807]: E0213 19:45:00.079521 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:45:00.104046 kubelet[1807]: I0213 19:45:00.103953 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92cg8\" (UniqueName: \"kubernetes.io/projected/04b14b13-baaa-41e3-952c-da558cf5b655-kube-api-access-92cg8\") pod \"nginx-deployment-8587fbcb89-6d6tm\" (UID: \"04b14b13-baaa-41e3-952c-da558cf5b655\") " pod="default/nginx-deployment-8587fbcb89-6d6tm"
Feb 13 19:45:00.256931 kubelet[1807]: I0213 19:45:00.256887 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060"
Feb 13 19:45:00.257586 containerd[1493]: time="2025-02-13T19:45:00.257549911Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\""
Feb 13 19:45:00.257966 containerd[1493]: time="2025-02-13T19:45:00.257770131Z" level=info msg="Ensure that sandbox df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060 in task-service has been cleanup successfully"
Feb 13 19:45:00.258035 containerd[1493]: time="2025-02-13T19:45:00.257962882Z" level=info msg="TearDown network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" successfully"
Feb 13 19:45:00.258035 containerd[1493]: time="2025-02-13T19:45:00.257980032Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" returns successfully"
Feb 13 19:45:00.259131 containerd[1493]: time="2025-02-13T19:45:00.258979883Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\""
Feb 13 19:45:00.259131 containerd[1493]: time="2025-02-13T19:45:00.259071506Z" level=info msg="TearDown network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" successfully"
Feb 13 19:45:00.259131 containerd[1493]: time="2025-02-13T19:45:00.259082915Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" returns successfully"
Feb 13 19:45:00.259665 containerd[1493]: time="2025-02-13T19:45:00.259641615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:2,}"
Feb 13 19:45:00.260110 systemd[1]: run-netns-cni\x2d7addf0be\x2dcf37\x2d8bb4\x2d2005\x2d44f062daba13.mount: Deactivated successfully.
Feb 13 19:45:00.348539 containerd[1493]: time="2025-02-13T19:45:00.348416606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:0,}"
Feb 13 19:45:00.389066 containerd[1493]: time="2025-02-13T19:45:00.389020244Z" level=error msg="Failed to destroy network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:45:00.389504 containerd[1493]: time="2025-02-13T19:45:00.389463929Z" level=error msg="encountered an error cleaning up failed sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:45:00.389604 containerd[1493]: time="2025-02-13T19:45:00.389548320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:45:00.389898 kubelet[1807]: E0213 19:45:00.389849 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:45:00.389957 kubelet[1807]: E0213 19:45:00.389928 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:45:00.390041 kubelet[1807]: E0213 19:45:00.389954 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9"
Feb 13 19:45:00.390041 kubelet[1807]: E0213 19:45:00.390014 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881"
Feb 13 19:45:00.415934 containerd[1493]: time="2025-02-13T19:45:00.415865446Z" level=error msg="Failed to destroy network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:45:00.416300 containerd[1493]: time="2025-02-13T19:45:00.416251620Z" level=error msg="encountered an error cleaning up failed sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:45:00.416364 containerd[1493]: time="2025-02-13T19:45:00.416330420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:45:00.416629 kubelet[1807]: E0213 19:45:00.416588 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:45:00.416686 kubelet[1807]: E0213 19:45:00.416657 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm"
Feb 13 19:45:00.416723 kubelet[1807]: E0213 19:45:00.416681 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm"
Feb 13 19:45:00.416776 kubelet[1807]: E0213 19:45:00.416746 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\\\":
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-6d6tm" podUID="04b14b13-baaa-41e3-952c-da558cf5b655" Feb 13 19:45:00.569311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11-shm.mount: Deactivated successfully. Feb 13 19:45:01.080635 kubelet[1807]: E0213 19:45:01.080569 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:01.259796 kubelet[1807]: I0213 19:45:01.259756 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11" Feb 13 19:45:01.260340 containerd[1493]: time="2025-02-13T19:45:01.260305776Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\"" Feb 13 19:45:01.260778 containerd[1493]: time="2025-02-13T19:45:01.260558976Z" level=info msg="Ensure that sandbox f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11 in task-service has been cleanup successfully" Feb 13 19:45:01.260917 kubelet[1807]: I0213 19:45:01.260884 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19" Feb 13 19:45:01.261571 containerd[1493]: time="2025-02-13T19:45:01.261263382Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\"" Feb 13 19:45:01.261571 containerd[1493]: time="2025-02-13T19:45:01.261464784Z" level=info msg="Ensure that sandbox aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19 in task-service has been cleanup successfully" Feb 13 19:45:01.263036 systemd[1]: run-netns-cni\x2d4c1a9aee\x2d5b5e\x2d96f9\x2d8528\x2d9f6b73a612dd.mount: Deactivated successfully. Feb 13 19:45:01.263138 systemd[1]: run-netns-cni\x2d188ff8cf\x2d0744\x2dc467\x2d4033\x2d914357e4c928.mount: Deactivated successfully. 
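The run-netns-cni\x2d... units that systemd deactivates after each teardown are mount units for the per-sandbox network namespaces under /run/netns; the \x2d sequences are systemd's escaping of literal '-' characters in the path. A small sketch of the two escaping rules visible in these unit names (the real systemd-escape(1) handles more characters than this):

package main

import (
	"fmt"
	"strings"
)

// escapePath applies the two rules visible in the unit names above:
// strip the leading '/', escape literal '-' as \x2d, then map '/' to '-'.
func escapePath(p string) string {
	p = strings.TrimPrefix(p, "/")
	p = strings.ReplaceAll(p, "-", `\x2d`)
	return strings.ReplaceAll(p, "/", "-")
}

func main() {
	// Reproduces run-netns-cni\x2d4c1a9aee\x2d5b5e\x2d96f9\x2d8528\x2d9f6b73a612dd.mount
	fmt.Println(escapePath("/run/netns/cni-4c1a9aee-5b5e-96f9-8528-9f6b73a612dd") + ".mount")
}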
Feb 13 19:45:01.263476 containerd[1493]: time="2025-02-13T19:45:01.263377388Z" level=info msg="TearDown network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" successfully" Feb 13 19:45:01.263476 containerd[1493]: time="2025-02-13T19:45:01.263394901Z" level=info msg="TearDown network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" successfully" Feb 13 19:45:01.263476 containerd[1493]: time="2025-02-13T19:45:01.263419928Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" returns successfully" Feb 13 19:45:01.263476 containerd[1493]: time="2025-02-13T19:45:01.263397977Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" returns successfully" Feb 13 19:45:01.264044 containerd[1493]: time="2025-02-13T19:45:01.263853610Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\"" Feb 13 19:45:01.264044 containerd[1493]: time="2025-02-13T19:45:01.263958329Z" level=info msg="TearDown network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" successfully" Feb 13 19:45:01.264044 containerd[1493]: time="2025-02-13T19:45:01.263967386Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" returns successfully" Feb 13 19:45:01.264658 containerd[1493]: time="2025-02-13T19:45:01.264623330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:1,}" Feb 13 19:45:01.264658 containerd[1493]: time="2025-02-13T19:45:01.264645491Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\"" Feb 13 19:45:01.264760 containerd[1493]: time="2025-02-13T19:45:01.264742645Z" level=info msg="TearDown network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" successfully" Feb 13 19:45:01.264864 containerd[1493]: time="2025-02-13T19:45:01.264760419Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" returns successfully" Feb 13 19:45:01.265556 containerd[1493]: time="2025-02-13T19:45:01.265304941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:3,}" Feb 13 19:45:01.350983 containerd[1493]: time="2025-02-13T19:45:01.350853996Z" level=error msg="Failed to destroy network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:01.389038 containerd[1493]: time="2025-02-13T19:45:01.388975235Z" level=error msg="encountered an error cleaning up failed sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:01.389158 containerd[1493]: time="2025-02-13T19:45:01.389076817Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:01.389465 kubelet[1807]: E0213 19:45:01.389421 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:01.389532 kubelet[1807]: E0213 19:45:01.389494 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9" Feb 13 19:45:01.389590 kubelet[1807]: E0213 19:45:01.389532 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9" Feb 13 19:45:01.389628 kubelet[1807]: E0213 19:45:01.389582 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881" Feb 13 19:45:01.404882 containerd[1493]: time="2025-02-13T19:45:01.404828350Z" level=error msg="Failed to destroy network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:01.405338 containerd[1493]: time="2025-02-13T19:45:01.405280997Z" level=error msg="encountered an error cleaning up failed sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:01.405395 containerd[1493]: 
time="2025-02-13T19:45:01.405371840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:01.405736 kubelet[1807]: E0213 19:45:01.405697 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:01.405802 kubelet[1807]: E0213 19:45:01.405763 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm" Feb 13 19:45:01.405802 kubelet[1807]: E0213 19:45:01.405781 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm" Feb 13 19:45:01.405867 kubelet[1807]: E0213 19:45:01.405824 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-6d6tm" podUID="04b14b13-baaa-41e3-952c-da558cf5b655" Feb 13 19:45:01.568442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6-shm.mount: Deactivated successfully. 
Feb 13 19:45:02.070404 kubelet[1807]: E0213 19:45:02.070347 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:02.081019 kubelet[1807]: E0213 19:45:02.080994 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:02.266051 kubelet[1807]: I0213 19:45:02.266016 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07" Feb 13 19:45:02.267163 containerd[1493]: time="2025-02-13T19:45:02.266753186Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\"" Feb 13 19:45:02.267163 containerd[1493]: time="2025-02-13T19:45:02.267078172Z" level=info msg="Ensure that sandbox 150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07 in task-service has been cleanup successfully" Feb 13 19:45:02.267760 containerd[1493]: time="2025-02-13T19:45:02.267334117Z" level=info msg="TearDown network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" successfully" Feb 13 19:45:02.267760 containerd[1493]: time="2025-02-13T19:45:02.267348554Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" returns successfully" Feb 13 19:45:02.267760 containerd[1493]: time="2025-02-13T19:45:02.267553802Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\"" Feb 13 19:45:02.267760 containerd[1493]: time="2025-02-13T19:45:02.267675893Z" level=info msg="TearDown network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" successfully" Feb 13 19:45:02.267760 containerd[1493]: time="2025-02-13T19:45:02.267688518Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" returns successfully" Feb 13 19:45:02.267890 kubelet[1807]: I0213 19:45:02.267179 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6" Feb 13 19:45:02.267932 containerd[1493]: time="2025-02-13T19:45:02.267782345Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\"" Feb 13 19:45:02.268112 containerd[1493]: time="2025-02-13T19:45:02.268078526Z" level=info msg="Ensure that sandbox 8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6 in task-service has been cleanup successfully" Feb 13 19:45:02.268407 containerd[1493]: time="2025-02-13T19:45:02.268344981Z" level=info msg="TearDown network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" successfully" Feb 13 19:45:02.268407 containerd[1493]: time="2025-02-13T19:45:02.268364098Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" returns successfully" Feb 13 19:45:02.269187 containerd[1493]: time="2025-02-13T19:45:02.269137983Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\"" Feb 13 19:45:02.269272 containerd[1493]: time="2025-02-13T19:45:02.269252370Z" level=info msg="TearDown network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" successfully" Feb 13 19:45:02.269272 containerd[1493]: time="2025-02-13T19:45:02.269268751Z" level=info msg="StopPodSandbox for 
\"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" returns successfully" Feb 13 19:45:02.269393 containerd[1493]: time="2025-02-13T19:45:02.269344694Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\"" Feb 13 19:45:02.269581 containerd[1493]: time="2025-02-13T19:45:02.269417493Z" level=info msg="TearDown network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" successfully" Feb 13 19:45:02.269581 containerd[1493]: time="2025-02-13T19:45:02.269434405Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" returns successfully" Feb 13 19:45:02.270462 containerd[1493]: time="2025-02-13T19:45:02.270037257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:2,}" Feb 13 19:45:02.270462 containerd[1493]: time="2025-02-13T19:45:02.270057174Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\"" Feb 13 19:45:02.270462 containerd[1493]: time="2025-02-13T19:45:02.270180197Z" level=info msg="TearDown network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" successfully" Feb 13 19:45:02.270462 containerd[1493]: time="2025-02-13T19:45:02.270195276Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" returns successfully" Feb 13 19:45:02.270114 systemd[1]: run-netns-cni\x2d6449ca23\x2dc9c5\x2d32d8\x2dd178\x2d579492c8250a.mount: Deactivated successfully. Feb 13 19:45:02.271555 containerd[1493]: time="2025-02-13T19:45:02.270993448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:4,}" Feb 13 19:45:02.270267 systemd[1]: run-netns-cni\x2dfdc5aa4f\x2d3892\x2daa26\x2d403f\x2d3e15c427bcf6.mount: Deactivated successfully. 
Feb 13 19:45:02.392053 containerd[1493]: time="2025-02-13T19:45:02.391896971Z" level=error msg="Failed to destroy network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:02.392578 containerd[1493]: time="2025-02-13T19:45:02.392544118Z" level=error msg="encountered an error cleaning up failed sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:02.392814 containerd[1493]: time="2025-02-13T19:45:02.392739277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:02.393048 kubelet[1807]: E0213 19:45:02.393007 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:02.393224 kubelet[1807]: E0213 19:45:02.393156 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9" Feb 13 19:45:02.393224 kubelet[1807]: E0213 19:45:02.393183 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9" Feb 13 19:45:02.393321 kubelet[1807]: E0213 19:45:02.393249 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qptz9" 
podUID="7f69143c-9b46-49ed-a443-f2500935a881" Feb 13 19:45:02.396856 containerd[1493]: time="2025-02-13T19:45:02.396812031Z" level=error msg="Failed to destroy network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:02.397229 containerd[1493]: time="2025-02-13T19:45:02.397207440Z" level=error msg="encountered an error cleaning up failed sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:02.397304 containerd[1493]: time="2025-02-13T19:45:02.397277433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:02.397553 kubelet[1807]: E0213 19:45:02.397525 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:02.397650 kubelet[1807]: E0213 19:45:02.397628 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm" Feb 13 19:45:02.397691 kubelet[1807]: E0213 19:45:02.397649 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm" Feb 13 19:45:02.397782 kubelet[1807]: E0213 19:45:02.397685 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-6d6tm" podUID="04b14b13-baaa-41e3-952c-da558cf5b655" Feb 13 19:45:02.568994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf-shm.mount: Deactivated successfully. Feb 13 19:45:03.082184 kubelet[1807]: E0213 19:45:03.082102 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:03.270631 kubelet[1807]: I0213 19:45:03.270596 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf" Feb 13 19:45:03.271251 containerd[1493]: time="2025-02-13T19:45:03.271195852Z" level=info msg="StopPodSandbox for \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\"" Feb 13 19:45:03.271746 containerd[1493]: time="2025-02-13T19:45:03.271469139Z" level=info msg="Ensure that sandbox 3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf in task-service has been cleanup successfully" Feb 13 19:45:03.271862 containerd[1493]: time="2025-02-13T19:45:03.271841995Z" level=info msg="TearDown network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" successfully" Feb 13 19:45:03.271891 containerd[1493]: time="2025-02-13T19:45:03.271861041Z" level=info msg="StopPodSandbox for \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" returns successfully" Feb 13 19:45:03.272323 containerd[1493]: time="2025-02-13T19:45:03.272302967Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\"" Feb 13 19:45:03.272409 containerd[1493]: time="2025-02-13T19:45:03.272387528Z" level=info msg="TearDown network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" successfully" Feb 13 19:45:03.272437 containerd[1493]: time="2025-02-13T19:45:03.272406423Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" returns successfully" Feb 13 19:45:03.272614 kubelet[1807]: I0213 19:45:03.272593 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8" Feb 13 19:45:03.273103 containerd[1493]: time="2025-02-13T19:45:03.273069519Z" level=info msg="StopPodSandbox for \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\"" Feb 13 19:45:03.273219 systemd[1]: run-netns-cni\x2d756e44ba\x2d8411\x2d324d\x2d0f33\x2dc8ad881dee8b.mount: Deactivated successfully. 
Feb 13 19:45:03.273507 containerd[1493]: time="2025-02-13T19:45:03.273240572Z" level=info msg="Ensure that sandbox 07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8 in task-service has been cleanup successfully" Feb 13 19:45:03.273507 containerd[1493]: time="2025-02-13T19:45:03.273456080Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\"" Feb 13 19:45:03.273560 containerd[1493]: time="2025-02-13T19:45:03.273543325Z" level=info msg="TearDown network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" successfully" Feb 13 19:45:03.273590 containerd[1493]: time="2025-02-13T19:45:03.273558615Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" returns successfully" Feb 13 19:45:03.274183 containerd[1493]: time="2025-02-13T19:45:03.273863231Z" level=info msg="TearDown network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" successfully" Feb 13 19:45:03.274183 containerd[1493]: time="2025-02-13T19:45:03.273889982Z" level=info msg="StopPodSandbox for \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" returns successfully" Feb 13 19:45:03.274183 containerd[1493]: time="2025-02-13T19:45:03.273979892Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\"" Feb 13 19:45:03.274183 containerd[1493]: time="2025-02-13T19:45:03.274139153Z" level=info msg="TearDown network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" successfully" Feb 13 19:45:03.274183 containerd[1493]: time="2025-02-13T19:45:03.274150795Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" returns successfully" Feb 13 19:45:03.274431 containerd[1493]: time="2025-02-13T19:45:03.274187224Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\"" Feb 13 19:45:03.274431 containerd[1493]: time="2025-02-13T19:45:03.274362646Z" level=info msg="TearDown network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" successfully" Feb 13 19:45:03.274431 containerd[1493]: time="2025-02-13T19:45:03.274410226Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" returns successfully" Feb 13 19:45:03.274733 containerd[1493]: time="2025-02-13T19:45:03.274708261Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\"" Feb 13 19:45:03.274830 containerd[1493]: time="2025-02-13T19:45:03.274801778Z" level=info msg="TearDown network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" successfully" Feb 13 19:45:03.274830 containerd[1493]: time="2025-02-13T19:45:03.274821916Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" returns successfully" Feb 13 19:45:03.274894 containerd[1493]: time="2025-02-13T19:45:03.274867111Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\"" Feb 13 19:45:03.274974 containerd[1493]: time="2025-02-13T19:45:03.274954046Z" level=info msg="TearDown network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" successfully" Feb 13 19:45:03.275018 containerd[1493]: time="2025-02-13T19:45:03.274972601Z" level=info msg="StopPodSandbox for 
\"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" returns successfully" Feb 13 19:45:03.275451 containerd[1493]: time="2025-02-13T19:45:03.275420388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:3,}" Feb 13 19:45:03.275657 containerd[1493]: time="2025-02-13T19:45:03.275527672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:5,}" Feb 13 19:45:03.275805 systemd[1]: run-netns-cni\x2de812a962\x2dcf4b\x2d1be3\x2d165e\x2d9ba0c0a28ca9.mount: Deactivated successfully. Feb 13 19:45:04.082806 kubelet[1807]: E0213 19:45:04.082746 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:04.743529 containerd[1493]: time="2025-02-13T19:45:04.743472833Z" level=error msg="Failed to destroy network for sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:04.744257 containerd[1493]: time="2025-02-13T19:45:04.744148301Z" level=error msg="encountered an error cleaning up failed sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:04.744257 containerd[1493]: time="2025-02-13T19:45:04.744204989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:04.744485 kubelet[1807]: E0213 19:45:04.744430 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:04.744551 kubelet[1807]: E0213 19:45:04.744508 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm" Feb 13 19:45:04.744551 kubelet[1807]: E0213 19:45:04.744530 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm" Feb 13 19:45:04.744625 kubelet[1807]: E0213 19:45:04.744575 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-6d6tm" podUID="04b14b13-baaa-41e3-952c-da558cf5b655" Feb 13 19:45:04.752137 containerd[1493]: time="2025-02-13T19:45:04.752083234Z" level=error msg="Failed to destroy network for sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:04.752711 containerd[1493]: time="2025-02-13T19:45:04.752679143Z" level=error msg="encountered an error cleaning up failed sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:04.752779 containerd[1493]: time="2025-02-13T19:45:04.752758041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:04.752989 kubelet[1807]: E0213 19:45:04.752951 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:04.753041 kubelet[1807]: E0213 19:45:04.753009 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9" Feb 13 19:45:04.753076 kubelet[1807]: E0213 19:45:04.753030 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9" Feb 13 19:45:04.753122 kubelet[1807]: E0213 19:45:04.753092 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881" Feb 13 19:45:04.766645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736-shm.mount: Deactivated successfully. Feb 13 19:45:04.767112 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68-shm.mount: Deactivated successfully. Feb 13 19:45:05.083359 kubelet[1807]: E0213 19:45:05.083213 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:05.281617 kubelet[1807]: I0213 19:45:05.281571 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736" Feb 13 19:45:05.282337 containerd[1493]: time="2025-02-13T19:45:05.282244111Z" level=info msg="StopPodSandbox for \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\"" Feb 13 19:45:05.282644 containerd[1493]: time="2025-02-13T19:45:05.282615594Z" level=info msg="Ensure that sandbox 48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736 in task-service has been cleanup successfully" Feb 13 19:45:05.283247 kubelet[1807]: I0213 19:45:05.283227 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68" Feb 13 19:45:05.284236 containerd[1493]: time="2025-02-13T19:45:05.283725282Z" level=info msg="StopPodSandbox for \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\"" Feb 13 19:45:05.284236 containerd[1493]: time="2025-02-13T19:45:05.283774375Z" level=info msg="TearDown network for sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\" successfully" Feb 13 19:45:05.284236 containerd[1493]: time="2025-02-13T19:45:05.283803241Z" level=info msg="StopPodSandbox for \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\" returns successfully" Feb 13 19:45:05.284236 containerd[1493]: time="2025-02-13T19:45:05.283929259Z" level=info msg="Ensure that sandbox 97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68 in task-service has been cleanup successfully" Feb 13 19:45:05.284236 containerd[1493]: time="2025-02-13T19:45:05.284142151Z" level=info msg="TearDown network for sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\" successfully" Feb 13 19:45:05.284236 containerd[1493]: time="2025-02-13T19:45:05.284155556Z" level=info msg="StopPodSandbox for 
\"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\" returns successfully" Feb 13 19:45:05.284522 containerd[1493]: time="2025-02-13T19:45:05.284510097Z" level=info msg="StopPodSandbox for \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\"" Feb 13 19:45:05.284633 containerd[1493]: time="2025-02-13T19:45:05.284583335Z" level=info msg="TearDown network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" successfully" Feb 13 19:45:05.284633 containerd[1493]: time="2025-02-13T19:45:05.284624483Z" level=info msg="StopPodSandbox for \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" returns successfully" Feb 13 19:45:05.284714 containerd[1493]: time="2025-02-13T19:45:05.284660230Z" level=info msg="StopPodSandbox for \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\"" Feb 13 19:45:05.284754 containerd[1493]: time="2025-02-13T19:45:05.284736525Z" level=info msg="TearDown network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" successfully" Feb 13 19:45:05.284754 containerd[1493]: time="2025-02-13T19:45:05.284745492Z" level=info msg="StopPodSandbox for \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" returns successfully" Feb 13 19:45:05.284984 systemd[1]: run-netns-cni\x2d77aa9038\x2d7d37\x2dd38e\x2d135c\x2ddebcbe0465c4.mount: Deactivated successfully. Feb 13 19:45:05.285165 containerd[1493]: time="2025-02-13T19:45:05.285142533Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\"" Feb 13 19:45:05.285232 containerd[1493]: time="2025-02-13T19:45:05.285214569Z" level=info msg="TearDown network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" successfully" Feb 13 19:45:05.285232 containerd[1493]: time="2025-02-13T19:45:05.285228766Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" returns successfully" Feb 13 19:45:05.285409 containerd[1493]: time="2025-02-13T19:45:05.285379681Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\"" Feb 13 19:45:05.285476 containerd[1493]: time="2025-02-13T19:45:05.285452990Z" level=info msg="TearDown network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" successfully" Feb 13 19:45:05.285476 containerd[1493]: time="2025-02-13T19:45:05.285469792Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" returns successfully" Feb 13 19:45:05.285840 containerd[1493]: time="2025-02-13T19:45:05.285812289Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\"" Feb 13 19:45:05.286170 containerd[1493]: time="2025-02-13T19:45:05.285948437Z" level=info msg="TearDown network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" successfully" Feb 13 19:45:05.286170 containerd[1493]: time="2025-02-13T19:45:05.285966682Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" returns successfully" Feb 13 19:45:05.286170 containerd[1493]: time="2025-02-13T19:45:05.286049799Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\"" Feb 13 19:45:05.286170 containerd[1493]: time="2025-02-13T19:45:05.286130681Z" level=info msg="TearDown network for sandbox 
\"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" successfully" Feb 13 19:45:05.286170 containerd[1493]: time="2025-02-13T19:45:05.286139758Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" returns successfully" Feb 13 19:45:05.286575 containerd[1493]: time="2025-02-13T19:45:05.286542751Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\"" Feb 13 19:45:05.286731 containerd[1493]: time="2025-02-13T19:45:05.286674921Z" level=info msg="TearDown network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" successfully" Feb 13 19:45:05.286731 containerd[1493]: time="2025-02-13T19:45:05.286713354Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" returns successfully" Feb 13 19:45:05.287194 containerd[1493]: time="2025-02-13T19:45:05.287169226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:4,}" Feb 13 19:45:05.287378 containerd[1493]: time="2025-02-13T19:45:05.287304782Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\"" Feb 13 19:45:05.287471 containerd[1493]: time="2025-02-13T19:45:05.287412436Z" level=info msg="TearDown network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" successfully" Feb 13 19:45:05.287471 containerd[1493]: time="2025-02-13T19:45:05.287434898Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" returns successfully" Feb 13 19:45:05.287818 systemd[1]: run-netns-cni\x2d958dc402\x2dddc6\x2d94fa\x2d25ef\x2d3aa1d9098b51.mount: Deactivated successfully. Feb 13 19:45:05.288707 containerd[1493]: time="2025-02-13T19:45:05.288676746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:6,}" Feb 13 19:45:06.084001 kubelet[1807]: E0213 19:45:06.083953 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:06.546642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276130827.mount: Deactivated successfully. 
Feb 13 19:45:07.084333 kubelet[1807]: E0213 19:45:07.084202 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:07.589023 containerd[1493]: time="2025-02-13T19:45:07.588940299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:07.594599 containerd[1493]: time="2025-02-13T19:45:07.594498219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:45:07.595983 containerd[1493]: time="2025-02-13T19:45:07.595890801Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:07.602616 containerd[1493]: time="2025-02-13T19:45:07.602565903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:07.604117 containerd[1493]: time="2025-02-13T19:45:07.602974285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.348104351s" Feb 13 19:45:07.604117 containerd[1493]: time="2025-02-13T19:45:07.603002999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:45:07.617153 containerd[1493]: time="2025-02-13T19:45:07.616975860Z" level=info msg="CreateContainer within sandbox \"df8c668a9f30253e964b4021ce469eedf86dfe9eb5fb63d0a65679ded9c9c6d4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:45:07.641646 containerd[1493]: time="2025-02-13T19:45:07.641518482Z" level=info msg="CreateContainer within sandbox \"df8c668a9f30253e964b4021ce469eedf86dfe9eb5fb63d0a65679ded9c9c6d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"665de594e0fa4a2d05836c5fce2a55a2fc4a6dfa3a7352c5291c4675430b63a3\"" Feb 13 19:45:07.642523 containerd[1493]: time="2025-02-13T19:45:07.642390220Z" level=info msg="StartContainer for \"665de594e0fa4a2d05836c5fce2a55a2fc4a6dfa3a7352c5291c4675430b63a3\"" Feb 13 19:45:07.658129 containerd[1493]: time="2025-02-13T19:45:07.657902158Z" level=error msg="Failed to destroy network for sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:07.658973 containerd[1493]: time="2025-02-13T19:45:07.658943956Z" level=error msg="encountered an error cleaning up failed sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:07.659140 containerd[1493]: time="2025-02-13T19:45:07.659119829Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:07.659572 kubelet[1807]: E0213 19:45:07.659535 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:07.660051 kubelet[1807]: E0213 19:45:07.659687 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9" Feb 13 19:45:07.660051 kubelet[1807]: E0213 19:45:07.659734 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qptz9" Feb 13 19:45:07.660051 kubelet[1807]: E0213 19:45:07.659785 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qptz9_calico-system(7f69143c-9b46-49ed-a443-f2500935a881)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qptz9" podUID="7f69143c-9b46-49ed-a443-f2500935a881" Feb 13 19:45:07.661918 containerd[1493]: time="2025-02-13T19:45:07.661856220Z" level=error msg="Failed to destroy network for sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:07.662320 containerd[1493]: time="2025-02-13T19:45:07.662269691Z" level=error msg="encountered an error cleaning up failed sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:07.662416 containerd[1493]: 
time="2025-02-13T19:45:07.662376693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:07.662780 kubelet[1807]: E0213 19:45:07.662679 1807 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:45:07.662780 kubelet[1807]: E0213 19:45:07.662753 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm" Feb 13 19:45:07.662982 kubelet[1807]: E0213 19:45:07.662777 1807 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-6d6tm" Feb 13 19:45:07.662982 kubelet[1807]: E0213 19:45:07.662843 1807 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-6d6tm_default(04b14b13-baaa-41e3-952c-da558cf5b655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-6d6tm" podUID="04b14b13-baaa-41e3-952c-da558cf5b655" Feb 13 19:45:07.669649 systemd[1]: Started cri-containerd-665de594e0fa4a2d05836c5fce2a55a2fc4a6dfa3a7352c5291c4675430b63a3.scope - libcontainer container 665de594e0fa4a2d05836c5fce2a55a2fc4a6dfa3a7352c5291c4675430b63a3. Feb 13 19:45:07.707922 containerd[1493]: time="2025-02-13T19:45:07.707870702Z" level=info msg="StartContainer for \"665de594e0fa4a2d05836c5fce2a55a2fc4a6dfa3a7352c5291c4675430b63a3\" returns successfully" Feb 13 19:45:07.805229 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:45:07.805383 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 19:45:08.084914 kubelet[1807]: E0213 19:45:08.084873 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:08.291216 kubelet[1807]: E0213 19:45:08.291169 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:45:08.293791 kubelet[1807]: I0213 19:45:08.293755 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622" Feb 13 19:45:08.295175 containerd[1493]: time="2025-02-13T19:45:08.294259224Z" level=info msg="StopPodSandbox for \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\"" Feb 13 19:45:08.295175 containerd[1493]: time="2025-02-13T19:45:08.294652807Z" level=info msg="Ensure that sandbox 91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622 in task-service has been cleanup successfully" Feb 13 19:45:08.295175 containerd[1493]: time="2025-02-13T19:45:08.294855890Z" level=info msg="TearDown network for sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\" successfully" Feb 13 19:45:08.295175 containerd[1493]: time="2025-02-13T19:45:08.294871329Z" level=info msg="StopPodSandbox for \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\" returns successfully" Feb 13 19:45:08.295480 containerd[1493]: time="2025-02-13T19:45:08.295226831Z" level=info msg="StopPodSandbox for \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\"" Feb 13 19:45:08.295480 containerd[1493]: time="2025-02-13T19:45:08.295336779Z" level=info msg="TearDown network for sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\" successfully" Feb 13 19:45:08.295480 containerd[1493]: time="2025-02-13T19:45:08.295350555Z" level=info msg="StopPodSandbox for \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\" returns successfully" Feb 13 19:45:08.295717 containerd[1493]: time="2025-02-13T19:45:08.295679876Z" level=info msg="StopPodSandbox for \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\"" Feb 13 19:45:08.295821 containerd[1493]: time="2025-02-13T19:45:08.295793622Z" level=info msg="TearDown network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" successfully" Feb 13 19:45:08.295821 containerd[1493]: time="2025-02-13T19:45:08.295814741Z" level=info msg="StopPodSandbox for \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" returns successfully" Feb 13 19:45:08.296208 containerd[1493]: time="2025-02-13T19:45:08.296177576Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\"" Feb 13 19:45:08.296334 containerd[1493]: time="2025-02-13T19:45:08.296317150Z" level=info msg="TearDown network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" successfully" Feb 13 19:45:08.296364 containerd[1493]: time="2025-02-13T19:45:08.296333912Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" returns successfully" Feb 13 19:45:08.296702 containerd[1493]: time="2025-02-13T19:45:08.296581129Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\"" Feb 13 19:45:08.296702 containerd[1493]: time="2025-02-13T19:45:08.296657293Z" level=info msg="TearDown network for sandbox 
\"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" successfully" Feb 13 19:45:08.296702 containerd[1493]: time="2025-02-13T19:45:08.296667382Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" returns successfully" Feb 13 19:45:08.297023 containerd[1493]: time="2025-02-13T19:45:08.297001152Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\"" Feb 13 19:45:08.297109 containerd[1493]: time="2025-02-13T19:45:08.297091974Z" level=info msg="TearDown network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" successfully" Feb 13 19:45:08.297133 containerd[1493]: time="2025-02-13T19:45:08.297109747Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" returns successfully" Feb 13 19:45:08.297369 kubelet[1807]: I0213 19:45:08.297342 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95" Feb 13 19:45:08.297449 containerd[1493]: time="2025-02-13T19:45:08.297406918Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\"" Feb 13 19:45:08.297530 containerd[1493]: time="2025-02-13T19:45:08.297499593Z" level=info msg="TearDown network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" successfully" Feb 13 19:45:08.297530 containerd[1493]: time="2025-02-13T19:45:08.297518329Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" returns successfully" Feb 13 19:45:08.297932 containerd[1493]: time="2025-02-13T19:45:08.297878499Z" level=info msg="StopPodSandbox for \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\"" Feb 13 19:45:08.297987 containerd[1493]: time="2025-02-13T19:45:08.297929796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:7,}" Feb 13 19:45:08.298140 containerd[1493]: time="2025-02-13T19:45:08.298112361Z" level=info msg="Ensure that sandbox 6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95 in task-service has been cleanup successfully" Feb 13 19:45:08.298489 containerd[1493]: time="2025-02-13T19:45:08.298464095Z" level=info msg="TearDown network for sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\" successfully" Feb 13 19:45:08.298489 containerd[1493]: time="2025-02-13T19:45:08.298485396Z" level=info msg="StopPodSandbox for \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\" returns successfully" Feb 13 19:45:08.298908 containerd[1493]: time="2025-02-13T19:45:08.298863570Z" level=info msg="StopPodSandbox for \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\"" Feb 13 19:45:08.299010 containerd[1493]: time="2025-02-13T19:45:08.298991481Z" level=info msg="TearDown network for sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\" successfully" Feb 13 19:45:08.299010 containerd[1493]: time="2025-02-13T19:45:08.299007201Z" level=info msg="StopPodSandbox for \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\" returns successfully" Feb 13 19:45:08.299343 containerd[1493]: time="2025-02-13T19:45:08.299269837Z" level=info msg="StopPodSandbox for \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\"" Feb 13 
19:45:08.299438 containerd[1493]: time="2025-02-13T19:45:08.299406755Z" level=info msg="TearDown network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" successfully" Feb 13 19:45:08.299477 containerd[1493]: time="2025-02-13T19:45:08.299434909Z" level=info msg="StopPodSandbox for \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" returns successfully" Feb 13 19:45:08.299759 containerd[1493]: time="2025-02-13T19:45:08.299736528Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\"" Feb 13 19:45:08.299841 containerd[1493]: time="2025-02-13T19:45:08.299820908Z" level=info msg="TearDown network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" successfully" Feb 13 19:45:08.299841 containerd[1493]: time="2025-02-13T19:45:08.299837669Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" returns successfully" Feb 13 19:45:08.300128 containerd[1493]: time="2025-02-13T19:45:08.300105024Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\"" Feb 13 19:45:08.300240 containerd[1493]: time="2025-02-13T19:45:08.300207538Z" level=info msg="TearDown network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" successfully" Feb 13 19:45:08.300240 containerd[1493]: time="2025-02-13T19:45:08.300225953Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" returns successfully" Feb 13 19:45:08.300979 containerd[1493]: time="2025-02-13T19:45:08.300954148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:5,}" Feb 13 19:45:08.461259 systemd-networkd[1412]: cali1b86d718697: Link UP Feb 13 19:45:08.461532 systemd-networkd[1412]: cali1b86d718697: Gained carrier Feb 13 19:45:08.470637 kubelet[1807]: I0213 19:45:08.470567 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hmpt5" podStartSLOduration=4.732739206 podStartE2EDuration="26.470544492s" podCreationTimestamp="2025-02-13 19:44:42 +0000 UTC" firstStartedPulling="2025-02-13 19:44:45.866958109 +0000 UTC m=+4.382838857" lastFinishedPulling="2025-02-13 19:45:07.604763395 +0000 UTC m=+26.120644143" observedRunningTime="2025-02-13 19:45:08.318720002 +0000 UTC m=+26.834600750" watchObservedRunningTime="2025-02-13 19:45:08.470544492 +0000 UTC m=+26.986425240" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.365 [INFO][2809] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.379 [INFO][2809] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0 nginx-deployment-8587fbcb89- default 04b14b13-baaa-41e3-952c-da558cf5b655 1031 0 2025-02-13 19:45:00 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.131 nginx-deployment-8587fbcb89-6d6tm eth0 default [] [] [kns.default ksa.default.default] cali1b86d718697 [] []}} ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Namespace="default" Pod="nginx-deployment-8587fbcb89-6d6tm"
WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.379 [INFO][2809] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Namespace="default" Pod="nginx-deployment-8587fbcb89-6d6tm" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.406 [INFO][2846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" HandleID="k8s-pod-network.cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Workload="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.416 [INFO][2846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" HandleID="k8s-pod-network.cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Workload="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042fc50), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.131", "pod":"nginx-deployment-8587fbcb89-6d6tm", "timestamp":"2025-02-13 19:45:08.406943569 +0000 UTC"}, Hostname:"10.0.0.131", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.416 [INFO][2846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.416 [INFO][2846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.416 [INFO][2846] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.131' Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.418 [INFO][2846] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" host="10.0.0.131" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.423 [INFO][2846] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.131" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.428 [INFO][2846] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.430 [INFO][2846] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.433 [INFO][2846] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.433 [INFO][2846] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" host="10.0.0.131" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.434 [INFO][2846] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83 Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.438 [INFO][2846] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" host="10.0.0.131" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.445 [INFO][2846] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.1/26] block=192.168.18.0/26 handle="k8s-pod-network.cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" host="10.0.0.131" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.445 [INFO][2846] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.1/26] handle="k8s-pod-network.cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" host="10.0.0.131" Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.445 [INFO][2846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:45:08.473717 containerd[1493]: 2025-02-13 19:45:08.445 [INFO][2846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.1/26] IPv6=[] ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" HandleID="k8s-pod-network.cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Workload="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0" Feb 13 19:45:08.474657 containerd[1493]: 2025-02-13 19:45:08.450 [INFO][2809] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Namespace="default" Pod="nginx-deployment-8587fbcb89-6d6tm" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"04b14b13-baaa-41e3-952c-da558cf5b655", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-6d6tm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1b86d718697", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:45:08.474657 containerd[1493]: 2025-02-13 19:45:08.450 [INFO][2809] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.1/32] ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Namespace="default" Pod="nginx-deployment-8587fbcb89-6d6tm" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0" Feb 13 19:45:08.474657 containerd[1493]: 2025-02-13 19:45:08.450 [INFO][2809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b86d718697 ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Namespace="default" Pod="nginx-deployment-8587fbcb89-6d6tm" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0" Feb 13 19:45:08.474657 containerd[1493]: 2025-02-13 19:45:08.461 [INFO][2809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Namespace="default" Pod="nginx-deployment-8587fbcb89-6d6tm" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0" Feb 13 19:45:08.474657 containerd[1493]: 2025-02-13 19:45:08.461 [INFO][2809] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Namespace="default" Pod="nginx-deployment-8587fbcb89-6d6tm" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"04b14b13-baaa-41e3-952c-da558cf5b655", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83", Pod:"nginx-deployment-8587fbcb89-6d6tm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1b86d718697", MAC:"aa:52:ab:a3:1b:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:45:08.474657 containerd[1493]: 2025-02-13 19:45:08.471 [INFO][2809] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83" Namespace="default" Pod="nginx-deployment-8587fbcb89-6d6tm" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--8587fbcb89--6d6tm-eth0" Feb 13 19:45:08.496860 containerd[1493]: time="2025-02-13T19:45:08.496750827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:08.496860 containerd[1493]: time="2025-02-13T19:45:08.496831419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:08.497026 containerd[1493]: time="2025-02-13T19:45:08.496846699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:08.497064 containerd[1493]: time="2025-02-13T19:45:08.496965382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:08.516495 systemd[1]: Started cri-containerd-cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83.scope - libcontainer container cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83.
Feb 13 19:45:08.531129 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:45:08.557035 systemd-networkd[1412]: cali1a821eb6e0e: Link UP Feb 13 19:45:08.557367 systemd-networkd[1412]: cali1a821eb6e0e: Gained carrier Feb 13 19:45:08.560838 containerd[1493]: time="2025-02-13T19:45:08.560791511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-6d6tm,Uid:04b14b13-baaa-41e3-952c-da558cf5b655,Namespace:default,Attempt:5,} returns sandbox id \"cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83\"" Feb 13 19:45:08.562486 containerd[1493]: time="2025-02-13T19:45:08.562462036Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.366 [INFO][2824] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.379 [INFO][2824] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.131-k8s-csi--node--driver--qptz9-eth0 csi-node-driver- calico-system 7f69143c-9b46-49ed-a443-f2500935a881 851 0 2025-02-13 19:44:42 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.131 csi-node-driver-qptz9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1a821eb6e0e [] []}} ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Namespace="calico-system" Pod="csi-node-driver-qptz9" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--qptz9-" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.379 [INFO][2824] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Namespace="calico-system" Pod="csi-node-driver-qptz9" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.407 [INFO][2845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" HandleID="k8s-pod-network.9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Workload="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.418 [INFO][2845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" HandleID="k8s-pod-network.9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Workload="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfd50), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.131", "pod":"csi-node-driver-qptz9", "timestamp":"2025-02-13 19:45:08.407030424 +0000 UTC"}, Hostname:"10.0.0.131", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.418 [INFO][2845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.445 [INFO][2845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.445 [INFO][2845] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.131' Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.520 [INFO][2845] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" host="10.0.0.131" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.526 [INFO][2845] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.131" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.532 [INFO][2845] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.534 [INFO][2845] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.536 [INFO][2845] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.536 [INFO][2845] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" host="10.0.0.131" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.538 [INFO][2845] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7 Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.542 [INFO][2845] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" host="10.0.0.131" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.551 [INFO][2845] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.2/26] block=192.168.18.0/26 handle="k8s-pod-network.9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" host="10.0.0.131" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.551 [INFO][2845] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.2/26] handle="k8s-pod-network.9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" host="10.0.0.131" Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.551 [INFO][2845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:45:08.572239 containerd[1493]: 2025-02-13 19:45:08.551 [INFO][2845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.2/26] IPv6=[] ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" HandleID="k8s-pod-network.9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Workload="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" Feb 13 19:45:08.573014 containerd[1493]: 2025-02-13 19:45:08.554 [INFO][2824] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Namespace="calico-system" Pod="csi-node-driver-qptz9" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-csi--node--driver--qptz9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f69143c-9b46-49ed-a443-f2500935a881", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 44, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"", Pod:"csi-node-driver-qptz9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a821eb6e0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:45:08.573014 containerd[1493]: 2025-02-13 19:45:08.554 [INFO][2824] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.2/32] ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Namespace="calico-system" Pod="csi-node-driver-qptz9" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" Feb 13 19:45:08.573014 containerd[1493]: 2025-02-13 19:45:08.554 [INFO][2824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a821eb6e0e ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Namespace="calico-system" Pod="csi-node-driver-qptz9" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" Feb 13 19:45:08.573014 containerd[1493]: 2025-02-13 19:45:08.559 [INFO][2824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Namespace="calico-system" Pod="csi-node-driver-qptz9" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" Feb 13 19:45:08.573014 containerd[1493]: 2025-02-13 19:45:08.559 [INFO][2824] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Namespace="calico-system" Pod="csi-node-driver-qptz9"
WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-csi--node--driver--qptz9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f69143c-9b46-49ed-a443-f2500935a881", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 44, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7", Pod:"csi-node-driver-qptz9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a821eb6e0e", MAC:"6a:e7:e7:85:1f:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:45:08.573014 containerd[1493]: 2025-02-13 19:45:08.569 [INFO][2824] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7" Namespace="calico-system" Pod="csi-node-driver-qptz9" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--qptz9-eth0" Feb 13 19:45:08.598882 systemd[1]: run-netns-cni\x2d9fa1091c\x2d1692\x2d5300\x2d44f2\x2d1385c0fcac7a.mount: Deactivated successfully. Feb 13 19:45:08.599006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622-shm.mount: Deactivated successfully. Feb 13 19:45:08.599117 systemd[1]: run-netns-cni\x2d5df4e0e7\x2df8ea\x2de9d5\x2dacd9\x2d4a98576fcb9e.mount: Deactivated successfully. Feb 13 19:45:08.599220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95-shm.mount: Deactivated successfully. Feb 13 19:45:08.602514 containerd[1493]: time="2025-02-13T19:45:08.602387713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:08.602514 containerd[1493]: time="2025-02-13T19:45:08.602469057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:08.602514 containerd[1493]: time="2025-02-13T19:45:08.602489906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:08.602880 containerd[1493]: time="2025-02-13T19:45:08.602601086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:08.633635 systemd[1]: Started cri-containerd-9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7.scope - libcontainer container 9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7. Feb 13 19:45:08.648842 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:45:08.661468 containerd[1493]: time="2025-02-13T19:45:08.661424585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qptz9,Uid:7f69143c-9b46-49ed-a443-f2500935a881,Namespace:calico-system,Attempt:7,} returns sandbox id \"9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7\"" Feb 13 19:45:09.085602 kubelet[1807]: E0213 19:45:09.085546 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:09.304987 kubelet[1807]: E0213 19:45:09.304952 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:45:09.517325 kernel: bpftool[3114]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:45:09.718480 systemd-networkd[1412]: cali1b86d718697: Gained IPv6LL Feb 13 19:45:09.749257 systemd-networkd[1412]: vxlan.calico: Link UP Feb 13 19:45:09.749270 systemd-networkd[1412]: vxlan.calico: Gained carrier Feb 13 19:45:09.910605 systemd-networkd[1412]: cali1a821eb6e0e: Gained IPv6LL Feb 13 19:45:10.086440 kubelet[1807]: E0213 19:45:10.086378 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:10.998628 systemd-networkd[1412]: vxlan.calico: Gained IPv6LL Feb 13 19:45:11.087367 kubelet[1807]: E0213 19:45:11.087097 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:11.832969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629227425.mount: Deactivated successfully. 
Feb 13 19:45:12.088205 kubelet[1807]: E0213 19:45:12.088077 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:12.952270 containerd[1493]: time="2025-02-13T19:45:12.952198954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:12.953026 containerd[1493]: time="2025-02-13T19:45:12.952957936Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 19:45:12.954128 containerd[1493]: time="2025-02-13T19:45:12.954101693Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:12.956995 containerd[1493]: time="2025-02-13T19:45:12.956944785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:12.958120 containerd[1493]: time="2025-02-13T19:45:12.958076600Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.395582082s" Feb 13 19:45:12.958120 containerd[1493]: time="2025-02-13T19:45:12.958107368Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:45:12.959207 containerd[1493]: time="2025-02-13T19:45:12.959163210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:45:12.960268 containerd[1493]: time="2025-02-13T19:45:12.960224070Z" level=info msg="CreateContainer within sandbox \"cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:45:12.976599 containerd[1493]: time="2025-02-13T19:45:12.976546376Z" level=info msg="CreateContainer within sandbox \"cc7325c5f2cd3030aff1e1aff1d928ca8266e1ae3b73d4ac0260fe1bc8246e83\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2b1cbfcda48ad156cf4906269964d7bb0658cc3df61bef404dd5b22bc7807673\"" Feb 13 19:45:12.977469 containerd[1493]: time="2025-02-13T19:45:12.977420966Z" level=info msg="StartContainer for \"2b1cbfcda48ad156cf4906269964d7bb0658cc3df61bef404dd5b22bc7807673\"" Feb 13 19:45:13.061585 systemd[1]: Started cri-containerd-2b1cbfcda48ad156cf4906269964d7bb0658cc3df61bef404dd5b22bc7807673.scope - libcontainer container 2b1cbfcda48ad156cf4906269964d7bb0658cc3df61bef404dd5b22bc7807673. 
Feb 13 19:45:13.088920 kubelet[1807]: E0213 19:45:13.088870 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:13.089919 containerd[1493]: time="2025-02-13T19:45:13.089868001Z" level=info msg="StartContainer for \"2b1cbfcda48ad156cf4906269964d7bb0658cc3df61bef404dd5b22bc7807673\" returns successfully" Feb 13 19:45:13.352005 kubelet[1807]: I0213 19:45:13.351913 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-6d6tm" podStartSLOduration=8.955040539 podStartE2EDuration="13.351894872s" podCreationTimestamp="2025-02-13 19:45:00 +0000 UTC" firstStartedPulling="2025-02-13 19:45:08.56208272 +0000 UTC m=+27.077963468" lastFinishedPulling="2025-02-13 19:45:12.958937053 +0000 UTC m=+31.474817801" observedRunningTime="2025-02-13 19:45:13.351693051 +0000 UTC m=+31.867573809" watchObservedRunningTime="2025-02-13 19:45:13.351894872 +0000 UTC m=+31.867775620" Feb 13 19:45:13.937200 update_engine[1474]: I20250213 19:45:13.937072 1474 update_attempter.cc:509] Updating boot flags... Feb 13 19:45:13.981320 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3282) Feb 13 19:45:14.023427 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3284) Feb 13 19:45:14.050361 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3284) Feb 13 19:45:14.089443 kubelet[1807]: E0213 19:45:14.089380 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:15.090418 kubelet[1807]: E0213 19:45:15.090281 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:16.090844 kubelet[1807]: E0213 19:45:16.090762 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:16.412023 containerd[1493]: time="2025-02-13T19:45:16.411744270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:16.413015 containerd[1493]: time="2025-02-13T19:45:16.412940213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:45:16.415158 containerd[1493]: time="2025-02-13T19:45:16.415094573Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:16.417466 containerd[1493]: time="2025-02-13T19:45:16.417424412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:16.418136 containerd[1493]: time="2025-02-13T19:45:16.418088533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 3.458876883s" Feb 13 19:45:16.418136 containerd[1493]: time="2025-02-13T19:45:16.418125422Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:45:16.421372 containerd[1493]: time="2025-02-13T19:45:16.421270899Z" level=info msg="CreateContainer within sandbox \"9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:45:16.444750 containerd[1493]: time="2025-02-13T19:45:16.444671628Z" level=info msg="CreateContainer within sandbox \"9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e4dcdea8a8b7247860983867f04f070f7220e085efcc515194174ca63b9fb029\"" Feb 13 19:45:16.445406 containerd[1493]: time="2025-02-13T19:45:16.445374572Z" level=info msg="StartContainer for \"e4dcdea8a8b7247860983867f04f070f7220e085efcc515194174ca63b9fb029\"" Feb 13 19:45:16.490468 systemd[1]: Started cri-containerd-e4dcdea8a8b7247860983867f04f070f7220e085efcc515194174ca63b9fb029.scope - libcontainer container e4dcdea8a8b7247860983867f04f070f7220e085efcc515194174ca63b9fb029. Feb 13 19:45:16.751781 containerd[1493]: time="2025-02-13T19:45:16.751721213Z" level=info msg="StartContainer for \"e4dcdea8a8b7247860983867f04f070f7220e085efcc515194174ca63b9fb029\" returns successfully" Feb 13 19:45:16.753204 containerd[1493]: time="2025-02-13T19:45:16.753159232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:45:17.091976 kubelet[1807]: E0213 19:45:17.091821 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:18.092881 kubelet[1807]: E0213 19:45:18.092807 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:18.541868 systemd[1]: Created slice kubepods-besteffort-podc73b825a_ffa3_4226_abe1_94350b4f395a.slice - libcontainer container kubepods-besteffort-podc73b825a_ffa3_4226_abe1_94350b4f395a.slice. 
Feb 13 19:45:18.699873 kubelet[1807]: I0213 19:45:18.699818 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brf8j\" (UniqueName: \"kubernetes.io/projected/c73b825a-ffa3-4226-abe1-94350b4f395a-kube-api-access-brf8j\") pod \"nfs-server-provisioner-0\" (UID: \"c73b825a-ffa3-4226-abe1-94350b4f395a\") " pod="default/nfs-server-provisioner-0" Feb 13 19:45:18.699873 kubelet[1807]: I0213 19:45:18.699873 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c73b825a-ffa3-4226-abe1-94350b4f395a-data\") pod \"nfs-server-provisioner-0\" (UID: \"c73b825a-ffa3-4226-abe1-94350b4f395a\") " pod="default/nfs-server-provisioner-0" Feb 13 19:45:18.845258 containerd[1493]: time="2025-02-13T19:45:18.845103645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c73b825a-ffa3-4226-abe1-94350b4f395a,Namespace:default,Attempt:0,}" Feb 13 19:45:19.093909 kubelet[1807]: E0213 19:45:19.093854 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:19.214898 systemd-networkd[1412]: cali60e51b789ff: Link UP Feb 13 19:45:19.215506 systemd-networkd[1412]: cali60e51b789ff: Gained carrier Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:18.965 [INFO][3343] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.131-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default c73b825a-ffa3-4226-abe1-94350b4f395a 1142 0 2025-02-13 19:45:18 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.131 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:18.965 [INFO][3343] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:18.996 [INFO][3356] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" HandleID="k8s-pod-network.6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Workload="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.005 [INFO][3356] ipam/ipam_plugin.go 265: Auto assigning IP
ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" HandleID="k8s-pod-network.6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Workload="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7060), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.131", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:45:18.996118373 +0000 UTC"}, Hostname:"10.0.0.131", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.005 [INFO][3356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.005 [INFO][3356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.005 [INFO][3356] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.131' Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.008 [INFO][3356] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" host="10.0.0.131" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.052 [INFO][3356] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.131" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.058 [INFO][3356] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.060 [INFO][3356] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.062 [INFO][3356] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.062 [INFO][3356] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" host="10.0.0.131" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.064 [INFO][3356] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.087 [INFO][3356] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" host="10.0.0.131" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.209 [INFO][3356] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.3/26] block=192.168.18.0/26 handle="k8s-pod-network.6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" host="10.0.0.131" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.209 [INFO][3356] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.3/26] handle="k8s-pod-network.6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" host="10.0.0.131" Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.209 [INFO][3356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:45:19.443147 containerd[1493]: 2025-02-13 19:45:19.209 [INFO][3356] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.3/26] IPv6=[] ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" HandleID="k8s-pod-network.6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Workload="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:45:19.443888 containerd[1493]: 2025-02-13 19:45:19.212 [INFO][3343] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c73b825a-ffa3-4226-abe1-94350b4f395a", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""},
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:45:19.443888 containerd[1493]: 2025-02-13 19:45:19.212 [INFO][3343] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.3/32] ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:45:19.443888 containerd[1493]: 2025-02-13 19:45:19.212 [INFO][3343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:45:19.443888 containerd[1493]: 2025-02-13 19:45:19.216 [INFO][3343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:45:19.444038 containerd[1493]: 2025-02-13 19:45:19.216 [INFO][3343] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c73b825a-ffa3-4226-abe1-94350b4f395a", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"16:0d:43:b0:f4:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs",
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:45:19.444038 containerd[1493]: 2025-02-13 19:45:19.440 [INFO][3343] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:45:19.543472 containerd[1493]: time="2025-02-13T19:45:19.543266927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:19.543472 containerd[1493]: time="2025-02-13T19:45:19.543361015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:19.543472 containerd[1493]: time="2025-02-13T19:45:19.543376694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:19.543631 containerd[1493]: time="2025-02-13T19:45:19.543467845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:19.567486 systemd[1]: Started cri-containerd-6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e.scope - libcontainer container 6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e. 
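The WorkloadEndpoint dump above lists the provisioner's container ports in Go hex notation. Decoded, they are the fixed NFS-related ports the nfs-server-provisioner chart exposes; a quick check:

package main

import "fmt"

func main() {
	// Hex port values copied from the endpoint dump above.
	ports := map[string]uint16{
		"nfs":      0x801,  // 2049
		"nlockmgr": 0x8023, // 32803
		"mountd":   0x4e50, // 20048
		"rquotad":  0x36b,  // 875
		"rpcbind":  0x6f,   // 111
		"statd":    0x296,  // 662
	}
	for name, p := range ports {
		fmt.Printf("%-8s %5d\n", name, p)
	}
}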
Feb 13 19:45:19.581237 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:45:19.608391 containerd[1493]: time="2025-02-13T19:45:19.608338108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c73b825a-ffa3-4226-abe1-94350b4f395a,Namespace:default,Attempt:0,} returns sandbox id \"6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e\"" Feb 13 19:45:20.094448 kubelet[1807]: E0213 19:45:20.094389 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:20.273216 containerd[1493]: time="2025-02-13T19:45:20.273132296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:20.274466 containerd[1493]: time="2025-02-13T19:45:20.274421423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:45:20.277135 containerd[1493]: time="2025-02-13T19:45:20.277101097Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:20.280163 containerd[1493]: time="2025-02-13T19:45:20.280068022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:20.281008 containerd[1493]: time="2025-02-13T19:45:20.280831019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.527616604s" Feb 13 19:45:20.281008 containerd[1493]: time="2025-02-13T19:45:20.280872698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:45:20.282091 containerd[1493]: time="2025-02-13T19:45:20.282035246Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:45:20.283359 containerd[1493]: time="2025-02-13T19:45:20.283325505Z" level=info msg="CreateContainer within sandbox \"9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:45:20.302103 containerd[1493]: time="2025-02-13T19:45:20.302045205Z" level=info msg="CreateContainer within sandbox \"9358e5577e2cf1b5e160f740cf6bf675875a88007c649c5693737801bd913fb7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0824ba1427ca8fdaa5b64c1f8bcde0240489d5bed220cad4718709b8c42b7d80\"" Feb 13 19:45:20.302625 containerd[1493]: time="2025-02-13T19:45:20.302599268Z" level=info msg="StartContainer for \"0824ba1427ca8fdaa5b64c1f8bcde0240489d5bed220cad4718709b8c42b7d80\"" Feb 13 19:45:20.335590 systemd[1]: Started cri-containerd-0824ba1427ca8fdaa5b64c1f8bcde0240489d5bed220cad4718709b8c42b7d80.scope - libcontainer container 
0824ba1427ca8fdaa5b64c1f8bcde0240489d5bed220cad4718709b8c42b7d80. Feb 13 19:45:20.371901 containerd[1493]: time="2025-02-13T19:45:20.371667671Z" level=info msg="StartContainer for \"0824ba1427ca8fdaa5b64c1f8bcde0240489d5bed220cad4718709b8c42b7d80\" returns successfully" Feb 13 19:45:20.471627 systemd-networkd[1412]: cali60e51b789ff: Gained IPv6LL Feb 13 19:45:21.095310 kubelet[1807]: E0213 19:45:21.095241 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:21.252828 kubelet[1807]: I0213 19:45:21.252784 1807 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:45:21.252828 kubelet[1807]: I0213 19:45:21.252824 1807 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:45:21.418685 kubelet[1807]: I0213 19:45:21.418525 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qptz9" podStartSLOduration=27.799076591 podStartE2EDuration="39.418506131s" podCreationTimestamp="2025-02-13 19:44:42 +0000 UTC" firstStartedPulling="2025-02-13 19:45:08.662480981 +0000 UTC m=+27.178361729" lastFinishedPulling="2025-02-13 19:45:20.281910521 +0000 UTC m=+38.797791269" observedRunningTime="2025-02-13 19:45:21.418467448 +0000 UTC m=+39.934348206" watchObservedRunningTime="2025-02-13 19:45:21.418506131 +0000 UTC m=+39.934386879" Feb 13 19:45:22.070050 kubelet[1807]: E0213 19:45:22.069985 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:22.095656 kubelet[1807]: E0213 19:45:22.095614 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:22.233174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219823907.mount: Deactivated successfully. 
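The pod_startup_latency_tracker record above reports two figures: podStartE2EDuration, which is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration, which appears to exclude the image-pull window (lastFinishedPulling minus firstStartedPulling). The arithmetic reconciles exactly under that reading:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the csi-node-driver-qptz9 record above.
	created := mustParse("2025-02-13T19:44:42Z")             // podCreationTimestamp
	firstPull := mustParse("2025-02-13T19:45:08.662480981Z") // firstStartedPulling
	lastPull := mustParse("2025-02-13T19:45:20.281910521Z")  // lastFinishedPulling
	running := mustParse("2025-02-13T19:45:21.418506131Z")   // watchObservedRunningTime

	e2e := running.Sub(created)     // 39.418506131s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 11.61942954s spent pulling the image
	slo := e2e - pull               // 27.799076591s = podStartSLOduration

	fmt.Println(e2e, pull, slo)
}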
Feb 13 19:45:23.096463 kubelet[1807]: E0213 19:45:23.096408 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:23.872146 containerd[1493]: time="2025-02-13T19:45:23.872075262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:23.873190 containerd[1493]: time="2025-02-13T19:45:23.873100751Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 19:45:23.874532 containerd[1493]: time="2025-02-13T19:45:23.874488733Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:23.878119 containerd[1493]: time="2025-02-13T19:45:23.878054431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:23.879219 containerd[1493]: time="2025-02-13T19:45:23.879169981Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.597099389s" Feb 13 19:45:23.879268 containerd[1493]: time="2025-02-13T19:45:23.879221808Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 19:45:23.881605 containerd[1493]: time="2025-02-13T19:45:23.881578162Z" level=info msg="CreateContainer within sandbox \"6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:45:23.893723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096633575.mount: Deactivated successfully. Feb 13 19:45:23.895524 containerd[1493]: time="2025-02-13T19:45:23.895490350Z" level=info msg="CreateContainer within sandbox \"6270ede121f1489a8dd41666918b9e730ecdc717b32f46d024b9633a5507ad7e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"223bc1be0ea9c94839d5953259623ba1823660c247b59143858890962f4cce3b\"" Feb 13 19:45:23.895969 containerd[1493]: time="2025-02-13T19:45:23.895939845Z" level=info msg="StartContainer for \"223bc1be0ea9c94839d5953259623ba1823660c247b59143858890962f4cce3b\"" Feb 13 19:45:23.932506 systemd[1]: Started cri-containerd-223bc1be0ea9c94839d5953259623ba1823660c247b59143858890962f4cce3b.scope - libcontainer container 223bc1be0ea9c94839d5953259623ba1823660c247b59143858890962f4cce3b. 
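The transient mount units above (var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount) use systemd's unit-name escaping: "/" becomes "-", so a literal "-" inside a path component is encoded as \x2d. A small decoder covering just the escapes seen in this log:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd path escaping for the cases in the log:
// "-" separates path components, "\x2d" is a literal "-" inside one.
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`):
			v, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
			b.WriteByte(byte(v))
			i += 3 // skip past the four-character \xNN escape
		case name[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount4096633575.mount`))
	// /var/lib/containerd/tmpmounts/containerd-mount4096633575
}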
Feb 13 19:45:24.022972 containerd[1493]: time="2025-02-13T19:45:24.022907470Z" level=info msg="StartContainer for \"223bc1be0ea9c94839d5953259623ba1823660c247b59143858890962f4cce3b\" returns successfully" Feb 13 19:45:24.096626 kubelet[1807]: E0213 19:45:24.096539 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:25.097552 kubelet[1807]: E0213 19:45:25.097483 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:26.098055 kubelet[1807]: E0213 19:45:26.097988 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:27.098845 kubelet[1807]: E0213 19:45:27.098782 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:28.099645 kubelet[1807]: E0213 19:45:28.099577 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:29.100415 kubelet[1807]: E0213 19:45:29.100314 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:30.100824 kubelet[1807]: E0213 19:45:30.100756 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:31.101414 kubelet[1807]: E0213 19:45:31.101356 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:32.102422 kubelet[1807]: E0213 19:45:32.102363 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:33.103262 kubelet[1807]: E0213 19:45:33.103190 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:34.104252 kubelet[1807]: E0213 19:45:34.104165 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:35.104685 kubelet[1807]: E0213 19:45:35.104621 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:35.989669 kubelet[1807]: E0213 19:45:35.989634 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:45:36.004414 kubelet[1807]: I0213 19:45:36.004342 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=13.733858266 podStartE2EDuration="18.004323867s" podCreationTimestamp="2025-02-13 19:45:18 +0000 UTC" firstStartedPulling="2025-02-13 19:45:19.609708158 +0000 UTC m=+38.125588906" lastFinishedPulling="2025-02-13 19:45:23.880173759 +0000 UTC m=+42.396054507" observedRunningTime="2025-02-13 19:45:24.356862848 +0000 UTC m=+42.872743596" watchObservedRunningTime="2025-02-13 19:45:36.004323867 +0000 UTC m=+54.520204615" Feb 13 19:45:36.104972 kubelet[1807]: E0213 19:45:36.104922 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:37.105917 kubelet[1807]: E0213 19:45:37.105865 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:45:38.106740 kubelet[1807]: E0213 19:45:38.106663 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:38.287654 systemd[1]: Created slice kubepods-besteffort-pod030df302_47e7_4dec_a81d_37fbbc5a2de3.slice - libcontainer container kubepods-besteffort-pod030df302_47e7_4dec_a81d_37fbbc5a2de3.slice. Feb 13 19:45:38.407968 kubelet[1807]: I0213 19:45:38.407807 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dmdx\" (UniqueName: \"kubernetes.io/projected/030df302-47e7-4dec-a81d-37fbbc5a2de3-kube-api-access-7dmdx\") pod \"test-pod-1\" (UID: \"030df302-47e7-4dec-a81d-37fbbc5a2de3\") " pod="default/test-pod-1" Feb 13 19:45:38.407968 kubelet[1807]: I0213 19:45:38.407885 1807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c00659d4-3201-409b-a1ce-499d1fe3214d\" (UniqueName: \"kubernetes.io/nfs/030df302-47e7-4dec-a81d-37fbbc5a2de3-pvc-c00659d4-3201-409b-a1ce-499d1fe3214d\") pod \"test-pod-1\" (UID: \"030df302-47e7-4dec-a81d-37fbbc5a2de3\") " pod="default/test-pod-1" Feb 13 19:45:38.538335 kernel: FS-Cache: Loaded Feb 13 19:45:38.612907 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:45:38.613019 kernel: RPC: Registered udp transport module. Feb 13 19:45:38.613039 kernel: RPC: Registered tcp transport module. Feb 13 19:45:38.613491 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:45:38.614990 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 19:45:38.919677 kernel: NFS: Registering the id_resolver key type Feb 13 19:45:38.919819 kernel: Key type id_resolver registered Feb 13 19:45:38.919849 kernel: Key type id_legacy registered Feb 13 19:45:38.948358 nfsidmap[3624]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:45:38.952849 nfsidmap[3627]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:45:39.107175 kubelet[1807]: E0213 19:45:39.107120 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:39.190881 containerd[1493]: time="2025-02-13T19:45:39.190734447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:030df302-47e7-4dec-a81d-37fbbc5a2de3,Namespace:default,Attempt:0,}" Feb 13 19:45:39.406708 systemd-networkd[1412]: cali5ec59c6bf6e: Link UP Feb 13 19:45:39.407027 systemd-networkd[1412]: cali5ec59c6bf6e: Gained carrier Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.244 [INFO][3630] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.131-k8s-test--pod--1-eth0 default 030df302-47e7-4dec-a81d-37fbbc5a2de3 1247 0 2025-02-13 19:45:18 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.131 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.244 [INFO][3630] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.274 [INFO][3644] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" HandleID="k8s-pod-network.21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Workload="10.0.0.131-k8s-test--pod--1-eth0" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.283 [INFO][3644] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" HandleID="k8s-pod-network.21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Workload="10.0.0.131-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e0a0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.131", "pod":"test-pod-1", "timestamp":"2025-02-13 19:45:39.2741831 +0000 UTC"}, Hostname:"10.0.0.131", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.284 [INFO][3644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.284 [INFO][3644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.284 [INFO][3644] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.131' Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.287 [INFO][3644] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" host="10.0.0.131" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.291 [INFO][3644] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.131" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.297 [INFO][3644] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.299 [INFO][3644] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.302 [INFO][3644] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="10.0.0.131" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.302 [INFO][3644] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" host="10.0.0.131" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.304 [INFO][3644] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8 Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.353 [INFO][3644] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" host="10.0.0.131" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.401 [INFO][3644] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.4/26] block=192.168.18.0/26 
handle="k8s-pod-network.21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" host="10.0.0.131" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.401 [INFO][3644] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.4/26] handle="k8s-pod-network.21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" host="10.0.0.131" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.401 [INFO][3644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.401 [INFO][3644] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.4/26] IPv6=[] ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" HandleID="k8s-pod-network.21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Workload="10.0.0.131-k8s-test--pod--1-eth0" Feb 13 19:45:39.423986 containerd[1493]: 2025-02-13 19:45:39.404 [INFO][3630] cni-plugin/k8s.go 386: Populated endpoint ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"030df302-47e7-4dec-a81d-37fbbc5a2de3", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:45:39.424713 containerd[1493]: 2025-02-13 19:45:39.404 [INFO][3630] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.4/32] ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" Feb 13 19:45:39.424713 containerd[1493]: 2025-02-13 19:45:39.404 [INFO][3630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" Feb 13 19:45:39.424713 containerd[1493]: 2025-02-13 19:45:39.407 [INFO][3630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" Feb 13 19:45:39.424713 containerd[1493]: 2025-02-13 19:45:39.407 [INFO][3630] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"030df302-47e7-4dec-a81d-37fbbc5a2de3", ResourceVersion:"1247", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 45, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"32:66:04:5b:bb:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:45:39.424713 containerd[1493]: 2025-02-13 19:45:39.421 [INFO][3630] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" Feb 13 19:45:39.494923 containerd[1493]: time="2025-02-13T19:45:39.494814402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:39.494923 containerd[1493]: time="2025-02-13T19:45:39.494887539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:39.494923 containerd[1493]: time="2025-02-13T19:45:39.494900032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:39.495178 containerd[1493]: time="2025-02-13T19:45:39.495002294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:39.515444 systemd[1]: Started cri-containerd-21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8.scope - libcontainer container 21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8. 
Feb 13 19:45:39.528269 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:45:39.553223 containerd[1493]: time="2025-02-13T19:45:39.553169178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:030df302-47e7-4dec-a81d-37fbbc5a2de3,Namespace:default,Attempt:0,} returns sandbox id \"21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8\"" Feb 13 19:45:39.554879 containerd[1493]: time="2025-02-13T19:45:39.554817125Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:45:40.085973 containerd[1493]: time="2025-02-13T19:45:40.085912526Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:40.086704 containerd[1493]: time="2025-02-13T19:45:40.086646455Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:45:40.090036 containerd[1493]: time="2025-02-13T19:45:40.090001237Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 535.137555ms" Feb 13 19:45:40.090036 containerd[1493]: time="2025-02-13T19:45:40.090035351Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:45:40.092059 containerd[1493]: time="2025-02-13T19:45:40.092033465Z" level=info msg="CreateContainer within sandbox \"21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:45:40.105834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481181716.mount: Deactivated successfully. Feb 13 19:45:40.107546 containerd[1493]: time="2025-02-13T19:45:40.107508528Z" level=info msg="CreateContainer within sandbox \"21c39b0aaca5365336c8daad941aa83a604bac68e4833f526eec5c45363e69a8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"700fa3b4f3c9f4e5f30003805e88d7582a1cbd9c7c58074afaab9d2d24fa6759\"" Feb 13 19:45:40.107613 kubelet[1807]: E0213 19:45:40.107569 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:40.107916 containerd[1493]: time="2025-02-13T19:45:40.107898370Z" level=info msg="StartContainer for \"700fa3b4f3c9f4e5f30003805e88d7582a1cbd9c7c58074afaab9d2d24fa6759\"" Feb 13 19:45:40.140548 systemd[1]: Started cri-containerd-700fa3b4f3c9f4e5f30003805e88d7582a1cbd9c7c58074afaab9d2d24fa6759.scope - libcontainer container 700fa3b4f3c9f4e5f30003805e88d7582a1cbd9c7c58074afaab9d2d24fa6759. 
Feb 13 19:45:40.166314 containerd[1493]: time="2025-02-13T19:45:40.166260716Z" level=info msg="StartContainer for \"700fa3b4f3c9f4e5f30003805e88d7582a1cbd9c7c58074afaab9d2d24fa6759\" returns successfully" Feb 13 19:45:40.391423 kubelet[1807]: I0213 19:45:40.391241 1807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=21.855065832 podStartE2EDuration="22.39122144s" podCreationTimestamp="2025-02-13 19:45:18 +0000 UTC" firstStartedPulling="2025-02-13 19:45:39.554517191 +0000 UTC m=+58.070397939" lastFinishedPulling="2025-02-13 19:45:40.090672799 +0000 UTC m=+58.606553547" observedRunningTime="2025-02-13 19:45:40.391084602 +0000 UTC m=+58.906965350" watchObservedRunningTime="2025-02-13 19:45:40.39122144 +0000 UTC m=+58.907102188" Feb 13 19:45:40.630603 systemd-networkd[1412]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:45:41.108679 kubelet[1807]: E0213 19:45:41.108612 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:42.070823 kubelet[1807]: E0213 19:45:42.070755 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:42.092436 containerd[1493]: time="2025-02-13T19:45:42.092402178Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\"" Feb 13 19:45:42.092846 containerd[1493]: time="2025-02-13T19:45:42.092528156Z" level=info msg="TearDown network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" successfully" Feb 13 19:45:42.092846 containerd[1493]: time="2025-02-13T19:45:42.092539988Z" level=info msg="StopPodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" returns successfully" Feb 13 19:45:42.092933 containerd[1493]: time="2025-02-13T19:45:42.092910544Z" level=info msg="RemovePodSandbox for \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\"" Feb 13 19:45:42.092965 containerd[1493]: time="2025-02-13T19:45:42.092934128Z" level=info msg="Forcibly stopping sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\"" Feb 13 19:45:42.093041 containerd[1493]: time="2025-02-13T19:45:42.092999711Z" level=info msg="TearDown network for sandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" successfully" Feb 13 19:45:42.096509 containerd[1493]: time="2025-02-13T19:45:42.096477775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:45:42.096572 containerd[1493]: time="2025-02-13T19:45:42.096520024Z" level=info msg="RemovePodSandbox \"2b71912fe5fa505e5c5b41dd1f7982961b8d5ea003839eddd5a581feccbc4aaa\" returns successfully" Feb 13 19:45:42.096822 containerd[1493]: time="2025-02-13T19:45:42.096797926Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\"" Feb 13 19:45:42.096919 containerd[1493]: time="2025-02-13T19:45:42.096903484Z" level=info msg="TearDown network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" successfully" Feb 13 19:45:42.096945 containerd[1493]: time="2025-02-13T19:45:42.096917150Z" level=info msg="StopPodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" returns successfully" Feb 13 19:45:42.097169 containerd[1493]: time="2025-02-13T19:45:42.097145649Z" level=info msg="RemovePodSandbox for \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\"" Feb 13 19:45:42.097223 containerd[1493]: time="2025-02-13T19:45:42.097169153Z" level=info msg="Forcibly stopping sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\"" Feb 13 19:45:42.097306 containerd[1493]: time="2025-02-13T19:45:42.097260445Z" level=info msg="TearDown network for sandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" successfully" Feb 13 19:45:42.101830 containerd[1493]: time="2025-02-13T19:45:42.101769816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:45:42.101830 containerd[1493]: time="2025-02-13T19:45:42.101822174Z" level=info msg="RemovePodSandbox \"df351911e53520b0583977d3cf9e2b94c58dc61ae91c0b8efa30f4c844c11060\" returns successfully" Feb 13 19:45:42.102234 containerd[1493]: time="2025-02-13T19:45:42.102210955Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\"" Feb 13 19:45:42.102383 containerd[1493]: time="2025-02-13T19:45:42.102325039Z" level=info msg="TearDown network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" successfully" Feb 13 19:45:42.102383 containerd[1493]: time="2025-02-13T19:45:42.102370594Z" level=info msg="StopPodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" returns successfully" Feb 13 19:45:42.102658 containerd[1493]: time="2025-02-13T19:45:42.102607760Z" level=info msg="RemovePodSandbox for \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\"" Feb 13 19:45:42.102658 containerd[1493]: time="2025-02-13T19:45:42.102635792Z" level=info msg="Forcibly stopping sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\"" Feb 13 19:45:42.102759 containerd[1493]: time="2025-02-13T19:45:42.102729870Z" level=info msg="TearDown network for sandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" successfully" Feb 13 19:45:42.105793 containerd[1493]: time="2025-02-13T19:45:42.105765942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:45:42.105854 containerd[1493]: time="2025-02-13T19:45:42.105802060Z" level=info msg="RemovePodSandbox \"f94968e5bd97d762f13d616bb386158e64cd228525b75b87fa6a25f4679b2b11\" returns successfully" Feb 13 19:45:42.106215 containerd[1493]: time="2025-02-13T19:45:42.106055697Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\"" Feb 13 19:45:42.106215 containerd[1493]: time="2025-02-13T19:45:42.106146237Z" level=info msg="TearDown network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" successfully" Feb 13 19:45:42.106215 containerd[1493]: time="2025-02-13T19:45:42.106171545Z" level=info msg="StopPodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" returns successfully" Feb 13 19:45:42.106425 containerd[1493]: time="2025-02-13T19:45:42.106401437Z" level=info msg="RemovePodSandbox for \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\"" Feb 13 19:45:42.106466 containerd[1493]: time="2025-02-13T19:45:42.106424901Z" level=info msg="Forcibly stopping sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\"" Feb 13 19:45:42.106541 containerd[1493]: time="2025-02-13T19:45:42.106497808Z" level=info msg="TearDown network for sandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" successfully" Feb 13 19:45:42.109298 containerd[1493]: time="2025-02-13T19:45:42.109246792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:45:42.109298 containerd[1493]: time="2025-02-13T19:45:42.109281747Z" level=info msg="RemovePodSandbox \"150b73f8de8aaf8d60c99908011d1fccafae82fe2d6ef78d33965387de6cea07\" returns successfully" Feb 13 19:45:42.109456 kubelet[1807]: E0213 19:45:42.109425 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:42.109793 containerd[1493]: time="2025-02-13T19:45:42.109535364Z" level=info msg="StopPodSandbox for \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\"" Feb 13 19:45:42.109793 containerd[1493]: time="2025-02-13T19:45:42.109627887Z" level=info msg="TearDown network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" successfully" Feb 13 19:45:42.109793 containerd[1493]: time="2025-02-13T19:45:42.109641593Z" level=info msg="StopPodSandbox for \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" returns successfully" Feb 13 19:45:42.109933 containerd[1493]: time="2025-02-13T19:45:42.109903395Z" level=info msg="RemovePodSandbox for \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\"" Feb 13 19:45:42.109978 containerd[1493]: time="2025-02-13T19:45:42.109931999Z" level=info msg="Forcibly stopping sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\"" Feb 13 19:45:42.110056 containerd[1493]: time="2025-02-13T19:45:42.110009304Z" level=info msg="TearDown network for sandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" successfully" Feb 13 19:45:42.113189 containerd[1493]: time="2025-02-13T19:45:42.113150244Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\": an error occurred when try to find 
sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:45:42.113278 containerd[1493]: time="2025-02-13T19:45:42.113192334Z" level=info msg="RemovePodSandbox \"3d40d24744570a972a00dce2f2f2bc87616badec88e2d25a3e3b74f52db714bf\" returns successfully" Feb 13 19:45:42.113789 containerd[1493]: time="2025-02-13T19:45:42.113617402Z" level=info msg="StopPodSandbox for \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\"" Feb 13 19:45:42.113789 containerd[1493]: time="2025-02-13T19:45:42.113713221Z" level=info msg="TearDown network for sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\" successfully" Feb 13 19:45:42.113789 containerd[1493]: time="2025-02-13T19:45:42.113724783Z" level=info msg="StopPodSandbox for \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\" returns successfully" Feb 13 19:45:42.114111 containerd[1493]: time="2025-02-13T19:45:42.114088808Z" level=info msg="RemovePodSandbox for \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\"" Feb 13 19:45:42.114184 containerd[1493]: time="2025-02-13T19:45:42.114114806Z" level=info msg="Forcibly stopping sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\"" Feb 13 19:45:42.114221 containerd[1493]: time="2025-02-13T19:45:42.114187843Z" level=info msg="TearDown network for sandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\" successfully" Feb 13 19:45:42.117499 containerd[1493]: time="2025-02-13T19:45:42.117435254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:45:42.117556 containerd[1493]: time="2025-02-13T19:45:42.117523029Z" level=info msg="RemovePodSandbox \"48e61313ad670dc6bc44befd95600db6d0c7ffa52ed4ee33a5a88bef37b81736\" returns successfully" Feb 13 19:45:42.117867 containerd[1493]: time="2025-02-13T19:45:42.117828663Z" level=info msg="StopPodSandbox for \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\"" Feb 13 19:45:42.117952 containerd[1493]: time="2025-02-13T19:45:42.117926486Z" level=info msg="TearDown network for sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\" successfully" Feb 13 19:45:42.117952 containerd[1493]: time="2025-02-13T19:45:42.117945452Z" level=info msg="StopPodSandbox for \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\" returns successfully" Feb 13 19:45:42.118271 containerd[1493]: time="2025-02-13T19:45:42.118228954Z" level=info msg="RemovePodSandbox for \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\"" Feb 13 19:45:42.118271 containerd[1493]: time="2025-02-13T19:45:42.118257167Z" level=info msg="Forcibly stopping sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\"" Feb 13 19:45:42.118471 containerd[1493]: time="2025-02-13T19:45:42.118361233Z" level=info msg="TearDown network for sandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\" successfully" Feb 13 19:45:42.121901 containerd[1493]: time="2025-02-13T19:45:42.121830540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:45:42.121901 containerd[1493]: time="2025-02-13T19:45:42.121882247Z" level=info msg="RemovePodSandbox \"91d95a8785ff799df62716fecb4908a7df724b06d02cb8ccf21840e798cea622\" returns successfully" Feb 13 19:45:42.122234 containerd[1493]: time="2025-02-13T19:45:42.122197920Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\"" Feb 13 19:45:42.122333 containerd[1493]: time="2025-02-13T19:45:42.122304271Z" level=info msg="TearDown network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" successfully" Feb 13 19:45:42.122333 containerd[1493]: time="2025-02-13T19:45:42.122314410Z" level=info msg="StopPodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" returns successfully" Feb 13 19:45:42.122725 containerd[1493]: time="2025-02-13T19:45:42.122672482Z" level=info msg="RemovePodSandbox for \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\"" Feb 13 19:45:42.122725 containerd[1493]: time="2025-02-13T19:45:42.122719280Z" level=info msg="Forcibly stopping sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\"" Feb 13 19:45:42.122864 containerd[1493]: time="2025-02-13T19:45:42.122814880Z" level=info msg="TearDown network for sandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" successfully" Feb 13 19:45:42.126074 containerd[1493]: time="2025-02-13T19:45:42.126025831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:45:42.126144 containerd[1493]: time="2025-02-13T19:45:42.126089902Z" level=info msg="RemovePodSandbox \"aa181f69b0f94cc844d2ce59066c6b0625cbc4fb39d037ed94635ae5d4fe3f19\" returns successfully" Feb 13 19:45:42.126511 containerd[1493]: time="2025-02-13T19:45:42.126489543Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\"" Feb 13 19:45:42.126662 containerd[1493]: time="2025-02-13T19:45:42.126604629Z" level=info msg="TearDown network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" successfully" Feb 13 19:45:42.126662 containerd[1493]: time="2025-02-13T19:45:42.126649513Z" level=info msg="StopPodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" returns successfully" Feb 13 19:45:42.126998 containerd[1493]: time="2025-02-13T19:45:42.126965136Z" level=info msg="RemovePodSandbox for \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\"" Feb 13 19:45:42.126998 containerd[1493]: time="2025-02-13T19:45:42.126988910Z" level=info msg="Forcibly stopping sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\"" Feb 13 19:45:42.127102 containerd[1493]: time="2025-02-13T19:45:42.127066026Z" level=info msg="TearDown network for sandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" successfully" Feb 13 19:45:42.129905 containerd[1493]: time="2025-02-13T19:45:42.129869862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:45:42.129961 containerd[1493]: time="2025-02-13T19:45:42.129920889Z" level=info msg="RemovePodSandbox \"8d068ca3da4b9de859ef4dca592adc67918dccb03fe8a3b1e3d374a18b1136e6\" returns successfully" Feb 13 19:45:42.130246 containerd[1493]: time="2025-02-13T19:45:42.130216173Z" level=info msg="StopPodSandbox for \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\"" Feb 13 19:45:42.130332 containerd[1493]: time="2025-02-13T19:45:42.130310871Z" level=info msg="TearDown network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" successfully" Feb 13 19:45:42.130332 containerd[1493]: time="2025-02-13T19:45:42.130325348Z" level=info msg="StopPodSandbox for \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" returns successfully" Feb 13 19:45:42.130602 containerd[1493]: time="2025-02-13T19:45:42.130576750Z" level=info msg="RemovePodSandbox for \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\"" Feb 13 19:45:42.130634 containerd[1493]: time="2025-02-13T19:45:42.130601136Z" level=info msg="Forcibly stopping sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\"" Feb 13 19:45:42.130816 containerd[1493]: time="2025-02-13T19:45:42.130674754Z" level=info msg="TearDown network for sandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" successfully" Feb 13 19:45:42.133795 containerd[1493]: time="2025-02-13T19:45:42.133751945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:45:42.133795 containerd[1493]: time="2025-02-13T19:45:42.133805786Z" level=info msg="RemovePodSandbox \"07a8ca7ca5eff2a09e38dac3b716e82b2b9cea10cf5b57cfc0fa6e80dc2cb9e8\" returns successfully" Feb 13 19:45:42.134130 containerd[1493]: time="2025-02-13T19:45:42.134100279Z" level=info msg="StopPodSandbox for \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\"" Feb 13 19:45:42.134257 containerd[1493]: time="2025-02-13T19:45:42.134221026Z" level=info msg="TearDown network for sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\" successfully" Feb 13 19:45:42.134257 containerd[1493]: time="2025-02-13T19:45:42.134242517Z" level=info msg="StopPodSandbox for \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\" returns successfully" Feb 13 19:45:42.134718 containerd[1493]: time="2025-02-13T19:45:42.134673596Z" level=info msg="RemovePodSandbox for \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\"" Feb 13 19:45:42.134775 containerd[1493]: time="2025-02-13T19:45:42.134721176Z" level=info msg="Forcibly stopping sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\"" Feb 13 19:45:42.134859 containerd[1493]: time="2025-02-13T19:45:42.134813910Z" level=info msg="TearDown network for sandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\" successfully" Feb 13 19:45:42.138052 containerd[1493]: time="2025-02-13T19:45:42.138008311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:45:42.138052 containerd[1493]: time="2025-02-13T19:45:42.138049147Z" level=info msg="RemovePodSandbox \"97be89c3f0cd4500e8e450aedd6a35764dccab10c230632108b92a1a34873e68\" returns successfully" Feb 13 19:45:42.138445 containerd[1493]: time="2025-02-13T19:45:42.138419554Z" level=info msg="StopPodSandbox for \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\"" Feb 13 19:45:42.138565 containerd[1493]: time="2025-02-13T19:45:42.138538066Z" level=info msg="TearDown network for sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\" successfully" Feb 13 19:45:42.138565 containerd[1493]: time="2025-02-13T19:45:42.138555969Z" level=info msg="StopPodSandbox for \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\" returns successfully" Feb 13 19:45:42.138802 containerd[1493]: time="2025-02-13T19:45:42.138778167Z" level=info msg="RemovePodSandbox for \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\"" Feb 13 19:45:42.138853 containerd[1493]: time="2025-02-13T19:45:42.138802593Z" level=info msg="Forcibly stopping sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\"" Feb 13 19:45:42.138939 containerd[1493]: time="2025-02-13T19:45:42.138879457Z" level=info msg="TearDown network for sandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\" successfully" Feb 13 19:45:42.141707 containerd[1493]: time="2025-02-13T19:45:42.141669128Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:45:42.141769 containerd[1493]: time="2025-02-13T19:45:42.141721526Z" level=info msg="RemovePodSandbox \"6d4133ac092eeab7adde9f57b4cd31708a8043bbcfacfc7130d17431e1b99f95\" returns successfully" Feb 13 19:45:43.110304 kubelet[1807]: E0213 19:45:43.110224 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:44.110919 kubelet[1807]: E0213 19:45:44.110860 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:45.111218 kubelet[1807]: E0213 19:45:45.111150 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:46.111896 kubelet[1807]: E0213 19:45:46.111841 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:45:47.112865 kubelet[1807]: E0213 19:45:47.112765 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
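The Stop/Remove cycle that repeats above looks like kubelet's periodic cleanup of old sandboxes: containerd tears the network down, warns that the sandbox status is already gone ("not found"), and still reports RemovePodSandbox as successful. Removal is effectively idempotent, with a missing sandbox treated as already removed. A sketch of that pattern, with hypothetical types rather than containerd's CRI internals:

package main

import (
	"errors"
	"fmt"
	"log"
)

// errNotFound stands in for containerd's "not found" error.
var errNotFound = errors.New("not found")

type store map[string]struct{} // hypothetical sandbox store

func (s store) status(id string) error {
	if _, ok := s[id]; !ok {
		return errNotFound
	}
	return nil
}

// remove treats a missing sandbox as already removed, mirroring the
// warn-then-succeed behaviour in the log.
func (s store) remove(id string) error {
	if err := s.status(id); errors.Is(err, errNotFound) {
		log.Printf("warning: failed to get podSandbox status for %q: %v; removing anyway", id, err)
	}
	delete(s, id) // no-op if absent
	return nil    // "RemovePodSandbox ... returns successfully"
}

func main() {
	s := store{}
	fmt.Println(s.remove("2b71912fe5fa...")) // <nil>, even though it never existed
}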