Mar 17 17:39:56.989862 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025
Mar 17 17:39:56.989883 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:39:56.989894 kernel: BIOS-provided physical RAM map:
Mar 17 17:39:56.989900 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:39:56.989906 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:39:56.989912 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:39:56.989920 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 17 17:39:56.989926 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 17 17:39:56.989933 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 17:39:56.989941 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 17:39:56.989947 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:39:56.989954 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:39:56.989960 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:39:56.989966 kernel: NX (Execute Disable) protection: active
Mar 17 17:39:56.989974 kernel: APIC: Static calls initialized
Mar 17 17:39:56.989983 kernel: SMBIOS 2.8 present.
Mar 17 17:39:56.989990 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 17 17:39:56.989996 kernel: Hypervisor detected: KVM
Mar 17 17:39:56.990003 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:39:56.990010 kernel: kvm-clock: using sched offset of 2964506382 cycles
Mar 17 17:39:56.990017 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:39:56.990024 kernel: tsc: Detected 2794.750 MHz processor
Mar 17 17:39:56.990031 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:39:56.990038 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:39:56.990048 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 17 17:39:56.990055 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:39:56.990062 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:39:56.990068 kernel: Using GB pages for direct mapping
Mar 17 17:39:56.990075 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:39:56.990082 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 17 17:39:56.990089 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:39:56.990096 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:39:56.990103 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:39:56.990112 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 17 17:39:56.990119 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:39:56.990125 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:39:56.990132 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:39:56.990139 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:39:56.990146 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Mar 17 17:39:56.990153 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Mar 17 17:39:56.990163 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 17 17:39:56.990173 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Mar 17 17:39:56.990180 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Mar 17 17:39:56.990187 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Mar 17 17:39:56.990194 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Mar 17 17:39:56.990201 kernel: No NUMA configuration found
Mar 17 17:39:56.990208 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 17 17:39:56.990218 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 17 17:39:56.990225 kernel: Zone ranges:
Mar 17 17:39:56.990232 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:39:56.990239 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 17 17:39:56.990246 kernel: Normal empty
Mar 17 17:39:56.990253 kernel: Movable zone start for each node
Mar 17 17:39:56.990261 kernel: Early memory node ranges
Mar 17 17:39:56.990268 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:39:56.990275 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 17 17:39:56.990282 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 17 17:39:56.990291 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:39:56.990299 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:39:56.990306 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 17:39:56.990313 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:39:56.990320 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:39:56.990327 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:39:56.990335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:39:56.990342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:39:56.990349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:39:56.990358 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:39:56.990366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:39:56.990373 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:39:56.990380 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:39:56.990387 kernel: TSC deadline timer available
Mar 17 17:39:56.990394 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 17:39:56.990401 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:39:56.990409 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 17:39:56.990416 kernel: kvm-guest: setup PV sched yield
Mar 17 17:39:56.990425 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 17:39:56.990432 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:39:56.990440 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:39:56.990447 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 17 17:39:56.990454 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 17 17:39:56.990461 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 17 17:39:56.990468 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 17:39:56.990475 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:39:56.990483 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:39:56.990493 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:39:56.990501 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:39:56.990508 kernel: random: crng init done
Mar 17 17:39:56.990515 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:39:56.990522 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:39:56.990530 kernel: Fallback order for Node 0: 0
Mar 17 17:39:56.990537 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 17 17:39:56.990600 kernel: Policy zone: DMA32
Mar 17 17:39:56.990611 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:39:56.990619 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 136900K reserved, 0K cma-reserved)
Mar 17 17:39:56.990626 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:39:56.990633 kernel: ftrace: allocating 37938 entries in 149 pages
Mar 17 17:39:56.990640 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:39:56.990647 kernel: Dynamic Preempt: voluntary
Mar 17 17:39:56.990655 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:39:56.990662 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:39:56.990670 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:39:56.990679 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:39:56.990687 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:39:56.990694 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:39:56.990701 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:39:56.990708 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:39:56.990716 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 17:39:56.990723 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:39:56.990730 kernel: Console: colour VGA+ 80x25
Mar 17 17:39:56.990737 kernel: printk: console [ttyS0] enabled
Mar 17 17:39:56.990747 kernel: ACPI: Core revision 20230628
Mar 17 17:39:56.990754 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:39:56.990761 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:39:56.990768 kernel: x2apic enabled
Mar 17 17:39:56.990776 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:39:56.990783 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 17 17:39:56.990790 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 17 17:39:56.990804 kernel: kvm-guest: setup PV IPIs
Mar 17 17:39:56.990822 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:39:56.990829 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 17:39:56.990837 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Mar 17 17:39:56.990845 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 17:39:56.990855 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 17:39:56.990862 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 17:39:56.990870 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:39:56.990877 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:39:56.990885 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:39:56.990895 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:39:56.990902 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 17:39:56.990910 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 17:39:56.990918 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:39:56.990925 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:39:56.990933 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 17:39:56.990941 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 17:39:56.990949 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 17:39:56.990959 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:39:56.990966 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:39:56.990974 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:39:56.990981 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:39:56.990989 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 17:39:56.990997 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:39:56.991004 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:39:56.991012 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:39:56.991019 kernel: landlock: Up and running.
Mar 17 17:39:56.991029 kernel: SELinux: Initializing.
Mar 17 17:39:56.991037 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:39:56.991044 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:39:56.991052 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 17:39:56.991060 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:39:56.991067 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:39:56.991075 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:39:56.991083 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 17:39:56.991090 kernel: ... version:                0
Mar 17 17:39:56.991100 kernel: ... bit width:              48
Mar 17 17:39:56.991107 kernel: ... generic registers:      6
Mar 17 17:39:56.991115 kernel: ... value mask:             0000ffffffffffff
Mar 17 17:39:56.991123 kernel: ... max period:             00007fffffffffff
Mar 17 17:39:56.991130 kernel: ... fixed-purpose events:   0
Mar 17 17:39:56.991137 kernel: ... event mask:             000000000000003f
Mar 17 17:39:56.991145 kernel: signal: max sigframe size: 1776
Mar 17 17:39:56.991152 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:39:56.991160 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:39:56.991170 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:39:56.991178 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:39:56.991185 kernel: .... node #0, CPUs: #1 #2 #3
Mar 17 17:39:56.991193 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:39:56.991200 kernel: smpboot: Max logical packages: 1
Mar 17 17:39:56.991208 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Mar 17 17:39:56.991215 kernel: devtmpfs: initialized
Mar 17 17:39:56.991223 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:39:56.991231 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:39:56.991240 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:39:56.991248 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:39:56.991255 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:39:56.991263 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:39:56.991271 kernel: audit: type=2000 audit(1742233196.993:1): state=initialized audit_enabled=0 res=1
Mar 17 17:39:56.991278 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:39:56.991286 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:39:56.991293 kernel: cpuidle: using governor menu
Mar 17 17:39:56.991301 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:39:56.991310 kernel: dca service started, version 1.12.1
Mar 17 17:39:56.991318 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 17:39:56.991326 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 17 17:39:56.991333 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:39:56.991341 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:39:56.991348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:39:56.991356 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:39:56.991364 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:39:56.991371 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:39:56.991381 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:39:56.991389 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:39:56.991396 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:39:56.991404 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:39:56.991411 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:39:56.991419 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 17 17:39:56.991426 kernel: ACPI: Interpreter enabled Mar 17 17:39:56.991434 kernel: ACPI: PM: (supports S0 S3 S5) Mar 17 17:39:56.991441 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 17:39:56.991451 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 17:39:56.991459 kernel: PCI: Using E820 reservations for host bridge windows Mar 17 17:39:56.991466 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 17 17:39:56.991474 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 17:39:56.991694 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:39:56.991831 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 17 17:39:56.991951 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 17 17:39:56.991965 kernel: PCI host bridge to bus 0000:00 Mar 17 17:39:56.992096 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 17:39:56.992207 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 17 17:39:56.992318 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Mar 17 17:39:56.992425 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 17 17:39:56.992534 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 17 17:39:56.992661 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 17 17:39:56.992774 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 17:39:56.992928 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 17 17:39:56.993070 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 17 17:39:56.993191 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 17 17:39:56.993310 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 17 17:39:56.993429 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 17 17:39:56.993561 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 17:39:56.993709 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 17:39:56.993840 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 17 17:39:56.993961 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 17 17:39:56.994080 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 17 17:39:56.994214 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 17 17:39:56.994334 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 17 17:39:56.994453 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 17 17:39:56.994593 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 17 17:39:56.994729 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 17 17:39:56.994861 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 17 17:39:56.994980 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 17 17:39:56.995099 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit 
pref] Mar 17 17:39:56.995219 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 17 17:39:56.995354 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 17 17:39:56.995480 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 17 17:39:56.995638 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 17 17:39:56.995763 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 17 17:39:56.995916 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 17 17:39:56.996075 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 17 17:39:56.996222 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 17 17:39:56.996241 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 17 17:39:56.996253 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 17 17:39:56.996264 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 17:39:56.996275 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 17 17:39:56.996286 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 17 17:39:56.996297 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 17 17:39:56.996308 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 17 17:39:56.996319 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 17 17:39:56.996330 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 17 17:39:56.996343 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 17 17:39:56.996354 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 17 17:39:56.996365 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 17 17:39:56.996376 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 17 17:39:56.996385 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 17 17:39:56.996395 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 17 17:39:56.996405 kernel: ACPI: PCI: 
Interrupt link GSIH configured for IRQ 23 Mar 17 17:39:56.996416 kernel: iommu: Default domain type: Translated Mar 17 17:39:56.996427 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 17:39:56.996441 kernel: PCI: Using ACPI for IRQ routing Mar 17 17:39:56.996452 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 17:39:56.996463 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 17 17:39:56.996474 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 17 17:39:56.996641 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 17 17:39:56.996805 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 17 17:39:56.996957 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 17:39:56.996971 kernel: vgaarb: loaded Mar 17 17:39:56.996987 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 17 17:39:56.996998 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 17 17:39:56.997009 kernel: clocksource: Switched to clocksource kvm-clock Mar 17 17:39:56.997020 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:39:56.997032 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:39:56.997043 kernel: pnp: PnP ACPI init Mar 17 17:39:56.997235 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 17 17:39:56.997252 kernel: pnp: PnP ACPI: found 6 devices Mar 17 17:39:56.997263 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 17:39:56.997279 kernel: NET: Registered PF_INET protocol family Mar 17 17:39:56.997290 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:39:56.997301 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:39:56.997312 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:39:56.997323 kernel: TCP established hash table entries: 32768 
(order: 6, 262144 bytes, linear) Mar 17 17:39:56.997335 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:39:56.997346 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:39:56.997357 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:39:56.997370 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:39:56.997380 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:39:56.997389 kernel: NET: Registered PF_XDP protocol family Mar 17 17:39:56.997518 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 17 17:39:56.999144 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 17 17:39:56.999284 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 17 17:39:56.999420 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 17 17:39:56.999609 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 17 17:39:56.999745 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 17 17:39:56.999764 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:39:56.999775 kernel: Initialise system trusted keyrings Mar 17 17:39:56.999785 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:39:56.999829 kernel: Key type asymmetric registered Mar 17 17:39:56.999841 kernel: Asymmetric key parser 'x509' registered Mar 17 17:39:56.999852 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 17 17:39:56.999863 kernel: io scheduler mq-deadline registered Mar 17 17:39:56.999874 kernel: io scheduler kyber registered Mar 17 17:39:56.999885 kernel: io scheduler bfq registered Mar 17 17:39:56.999899 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 17:39:56.999910 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 17 17:39:56.999920 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 17 17:39:56.999931 kernel: ACPI: 
\_SB_.GSIE: Enabled at IRQ 20 Mar 17 17:39:56.999942 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:39:56.999953 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 17:39:56.999965 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 17:39:56.999976 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 17:39:56.999987 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 17:39:57.000153 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 17 17:39:57.000170 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 17:39:57.000308 kernel: rtc_cmos 00:04: registered as rtc0 Mar 17 17:39:57.000448 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:39:56 UTC (1742233196) Mar 17 17:39:57.000604 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 17 17:39:57.000620 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 17 17:39:57.000631 kernel: hpet: Lost 1 RTC interrupts Mar 17 17:39:57.000642 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:39:57.000656 kernel: Segment Routing with IPv6 Mar 17 17:39:57.000667 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:39:57.000678 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:39:57.000689 kernel: Key type dns_resolver registered Mar 17 17:39:57.000700 kernel: IPI shorthand broadcast: enabled Mar 17 17:39:57.000711 kernel: sched_clock: Marking stable (836003752, 124220213)->(990920405, -30696440) Mar 17 17:39:57.000722 kernel: registered taskstats version 1 Mar 17 17:39:57.000733 kernel: Loading compiled-in X.509 certificates Mar 17 17:39:57.000744 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0' Mar 17 17:39:57.000758 kernel: Key type .fscrypt registered Mar 17 17:39:57.000769 kernel: Key type fscrypt-provisioning registered Mar 17 
17:39:57.000780 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 17:39:57.000790 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:39:57.000810 kernel: ima: No architecture policies found Mar 17 17:39:57.000820 kernel: clk: Disabling unused clocks Mar 17 17:39:57.000831 kernel: Freeing unused kernel image (initmem) memory: 42992K Mar 17 17:39:57.000842 kernel: Write protecting the kernel read-only data: 36864k Mar 17 17:39:57.000856 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Mar 17 17:39:57.000867 kernel: Run /init as init process Mar 17 17:39:57.000878 kernel: with arguments: Mar 17 17:39:57.000889 kernel: /init Mar 17 17:39:57.000899 kernel: with environment: Mar 17 17:39:57.000910 kernel: HOME=/ Mar 17 17:39:57.000920 kernel: TERM=linux Mar 17 17:39:57.000931 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:39:57.000944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:39:57.000961 systemd[1]: Detected virtualization kvm. Mar 17 17:39:57.000973 systemd[1]: Detected architecture x86-64. Mar 17 17:39:57.000985 systemd[1]: Running in initrd. Mar 17 17:39:57.000996 systemd[1]: No hostname configured, using default hostname. Mar 17 17:39:57.001007 systemd[1]: Hostname set to . Mar 17 17:39:57.001019 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:39:57.001030 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:39:57.001042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:39:57.001058 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 17 17:39:57.001084 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:39:57.001098 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:39:57.001109 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:39:57.001122 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:39:57.001139 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:39:57.001152 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:39:57.001164 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:39:57.001176 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:39:57.001187 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:39:57.001199 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:39:57.001211 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:39:57.001226 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:39:57.001238 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:39:57.001251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:39:57.001263 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:39:57.001276 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:39:57.001288 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:39:57.001300 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:39:57.001312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:39:57.001323 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:39:57.001337 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:39:57.001349 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:39:57.001361 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:39:57.001373 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:39:57.001386 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:39:57.001398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:39:57.001410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:39:57.001423 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:39:57.001435 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:39:57.001451 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:39:57.001463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:39:57.001475 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:39:57.001491 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:39:57.001529 systemd-journald[194]: Collecting audit messages is disabled.
Mar 17 17:39:57.001680 systemd-journald[194]: Journal started
Mar 17 17:39:57.001704 systemd-journald[194]: Runtime Journal (/run/log/journal/4e9f6b62f154480c97e41f4b334a4fad) is 6.0M, max 48.4M, 42.3M free.
Mar 17 17:39:56.981149 systemd-modules-load[195]: Inserted module 'overlay'
Mar 17 17:39:57.028396 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:39:57.028424 kernel: Bridge firewalling registered
Mar 17 17:39:57.028444 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:39:57.009671 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 17 17:39:57.040988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:39:57.042616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:39:57.065917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:39:57.067099 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:39:57.070693 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:39:57.071350 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:39:57.084052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:39:57.086902 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:39:57.088899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:39:57.099750 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:39:57.102839 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:39:57.119927 dracut-cmdline[228]: dracut-dracut-053
Mar 17 17:39:57.124505 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:39:57.142906 systemd-resolved[229]: Positive Trust Anchors:
Mar 17 17:39:57.142922 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:39:57.142953 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:39:57.145833 systemd-resolved[229]: Defaulting to hostname 'linux'.
Mar 17 17:39:57.146982 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:39:57.153872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:39:57.212585 kernel: SCSI subsystem initialized
Mar 17 17:39:57.224596 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:39:57.238603 kernel: iscsi: registered transport (tcp)
Mar 17 17:39:57.267701 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:39:57.267806 kernel: QLogic iSCSI HBA Driver
Mar 17 17:39:57.332984 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:39:57.343884 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:39:57.377238 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:39:57.377327 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:39:57.378575 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:39:57.427584 kernel: raid6: avx2x4 gen() 21372 MB/s
Mar 17 17:39:57.444599 kernel: raid6: avx2x2 gen() 21437 MB/s
Mar 17 17:39:57.461923 kernel: raid6: avx2x1 gen() 18010 MB/s
Mar 17 17:39:57.462016 kernel: raid6: using algorithm avx2x2 gen() 21437 MB/s
Mar 17 17:39:57.479934 kernel: raid6: .... xor() 14244 MB/s, rmw enabled
Mar 17 17:39:57.480035 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:39:57.505588 kernel: xor: automatically using best checksumming function avx
Mar 17 17:39:57.694602 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:39:57.711735 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:39:57.728835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:39:57.741170 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Mar 17 17:39:57.745981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:39:57.755907 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:39:57.777833 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Mar 17 17:39:57.818218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:39:57.828931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:39:57.911911 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:39:57.927043 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:39:57.945834 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:39:57.949401 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:39:57.955376 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:39:57.950136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:39:57.950829 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:39:57.962560 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 17 17:39:58.009205 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:39:58.009409 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:39:58.009426 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:39:58.009440 kernel: GPT:9289727 != 19775487
Mar 17 17:39:58.009463 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:39:58.009477 kernel: GPT:9289727 != 19775487
Mar 17 17:39:58.009490 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:39:58.009503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:39:58.009517 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:39:57.963802 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:39:57.984962 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:39:58.020565 kernel: libata version 3.00 loaded.
Mar 17 17:39:58.021001 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:39:58.021332 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:39:58.025995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:39:58.027251 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:39:58.027377 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
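Editor's note on the GPT warnings above: the virtio disk reports 19775488 512-byte sectors, so its last LBA is 19775487, yet the primary GPT header points at a backup header at LBA 9289727. That mismatch is the typical sign of a disk image enlarged after the GPT was written; as the log later shows, disk-uuid.service rewrites the headers ("Primary Header is updated" etc.) rather than GNU Parted. The sizes the kernel prints can be checked with a few lines of Python:

```python
# Reproduce the kernel's "[vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)"
# line and the GPT LBA mismatch from the log.
SECTOR = 512
sectors = 19775488

size_bytes = sectors * SECTOR        # total capacity in bytes
gb = size_bytes / 1e9                # decimal gigabytes, as the kernel reports "GB"
gib = size_bytes / 2**30             # binary gibibytes, as the kernel reports "GiB"

last_lba = sectors - 1               # where the backup GPT header belongs
stale_backup_lba = 9289727           # where the primary header says it is (per the log)

print(f"{gb:.1f} GB / {gib:.2f} GiB; last LBA {last_lba}, stale backup at {stale_backup_lba}")
```

The gap between 9289727 and 19775487 is roughly the amount the image was grown, which is why the kernel prints "Alternate GPT header not at the end of the disk."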
Mar 17 17:39:58.028985 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:39:58.097397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:39:58.100577 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (468)
Mar 17 17:39:58.103582 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (466)
Mar 17 17:39:58.118574 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 17:39:58.142646 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 17:39:58.142665 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 17:39:58.142860 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 17:39:58.143005 kernel: scsi host0: ahci
Mar 17 17:39:58.143187 kernel: scsi host1: ahci
Mar 17 17:39:58.143347 kernel: scsi host2: ahci
Mar 17 17:39:58.143561 kernel: scsi host3: ahci
Mar 17 17:39:58.143727 kernel: scsi host4: ahci
Mar 17 17:39:58.143885 kernel: scsi host5: ahci
Mar 17 17:39:58.144044 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 17 17:39:58.144059 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 17 17:39:58.144071 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 17 17:39:58.144082 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 17 17:39:58.144092 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 17 17:39:58.144106 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 17 17:39:58.131886 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:39:58.141325 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:39:58.143649 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:39:58.151205 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:39:58.193364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:39:58.202702 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:39:58.215842 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:39:58.219754 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:39:58.238257 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:39:58.245420 disk-uuid[567]: Primary Header is updated.
Mar 17 17:39:58.245420 disk-uuid[567]: Secondary Entries is updated.
Mar 17 17:39:58.245420 disk-uuid[567]: Secondary Header is updated.
Mar 17 17:39:58.249998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:39:58.253569 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:39:58.457793 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 17:39:58.457887 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 17:39:58.457901 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 17:39:58.457931 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 17:39:58.459590 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 17:39:58.459697 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 17:39:58.460582 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 17:39:58.461573 kernel: ata3.00: applying bridge limits
Mar 17 17:39:58.461592 kernel: ata3.00: configured for UDMA/100
Mar 17 17:39:58.462581 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 17:39:58.504591 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 17:39:58.519863 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:39:58.519882 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 17:39:59.273491 disk-uuid[578]: The operation has completed successfully.
Mar 17 17:39:59.275132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:39:59.304471 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:39:59.304666 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:39:59.326780 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:39:59.333829 sh[593]: Success
Mar 17 17:39:59.350574 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 17:39:59.393690 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:39:59.403737 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:39:59.405656 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:39:59.421125 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a
Mar 17 17:39:59.421166 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:39:59.421178 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:39:59.423294 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:39:59.423318 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:39:59.429411 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:39:59.431702 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:39:59.464268 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:39:59.467105 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:39:59.476454 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:39:59.476493 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:39:59.476508 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:39:59.480610 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:39:59.491829 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:39:59.494086 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:39:59.507672 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:39:59.519761 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:39:59.633816 ignition[691]: Ignition 2.20.0
Mar 17 17:39:59.633830 ignition[691]: Stage: fetch-offline
Mar 17 17:39:59.633902 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:39:59.633913 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:39:59.635952 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:39:59.634025 ignition[691]: parsed url from cmdline: ""
Mar 17 17:39:59.634029 ignition[691]: no config URL provided
Mar 17 17:39:59.634034 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:39:59.634043 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:39:59.634080 ignition[691]: op(1): [started] loading QEMU firmware config module
Mar 17 17:39:59.634086 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:39:59.661850 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:39:59.665559 ignition[691]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:39:59.667282 ignition[691]: parsing config with SHA512: 461a8bb33f3c74b5ab3a524add837219697270102cb4d95e5a45ae9b22c1d12551e7b48deca4351b4d83bb98d8bd392e327f511e2d5305a8c266b322f00ebd58
Mar 17 17:39:59.671458 unknown[691]: fetched base config from "system"
Mar 17 17:39:59.671597 unknown[691]: fetched user config from "qemu"
Mar 17 17:39:59.672015 ignition[691]: fetch-offline: fetch-offline passed
Mar 17 17:39:59.672117 ignition[691]: Ignition finished successfully
Mar 17 17:39:59.677262 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:39:59.689983 systemd-networkd[782]: lo: Link UP
Mar 17 17:39:59.689992 systemd-networkd[782]: lo: Gained carrier
Mar 17 17:39:59.691519 systemd-networkd[782]: Enumeration completed
Mar 17 17:39:59.691628 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:39:59.691972 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:39:59.691977 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:39:59.692885 systemd-networkd[782]: eth0: Link UP
Mar 17 17:39:59.692890 systemd-networkd[782]: eth0: Gained carrier
Mar 17 17:39:59.692898 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:39:59.693337 systemd[1]: Reached target network.target - Network.
Mar 17 17:39:59.695177 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:39:59.703765 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:39:59.710605 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:39:59.731096 ignition[785]: Ignition 2.20.0
Mar 17 17:39:59.731107 ignition[785]: Stage: kargs
Mar 17 17:39:59.731273 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:39:59.731285 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:39:59.732092 ignition[785]: kargs: kargs passed
Mar 17 17:39:59.732139 ignition[785]: Ignition finished successfully
Mar 17 17:39:59.735600 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:39:59.747793 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:39:59.768033 ignition[795]: Ignition 2.20.0
Mar 17 17:39:59.768045 ignition[795]: Stage: disks
Mar 17 17:39:59.768206 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:39:59.768218 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:39:59.770942 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:39:59.768876 ignition[795]: disks: disks passed
Mar 17 17:39:59.772820 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:39:59.768920 ignition[795]: Ignition finished successfully
Mar 17 17:39:59.774710 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:39:59.776651 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:39:59.777737 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:39:59.779793 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:39:59.791662 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:39:59.803023 systemd-resolved[229]: Detected conflict on linux IN A 10.0.0.38
Mar 17 17:39:59.803041 systemd-resolved[229]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Mar 17 17:39:59.806099 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:39:59.814192 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:39:59.828816 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:39:59.934567 kernel: EXT4-fs (vda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none.
Mar 17 17:39:59.935033 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:39:59.935975 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:39:59.947799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:39:59.951106 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:39:59.951539 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:39:59.957958 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813)
Mar 17 17:39:59.951609 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:39:59.964078 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:39:59.964109 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:39:59.964123 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:39:59.964137 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:39:59.951638 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:39:59.968732 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:39:59.979389 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:39:59.980625 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:40:00.035974 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:40:00.042754 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:40:00.048699 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:40:00.054685 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:40:00.171202 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:40:00.178724 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:40:00.180847 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:40:00.187579 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:40:00.236401 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:40:00.244467 ignition[925]: INFO : Ignition 2.20.0
Mar 17 17:40:00.244467 ignition[925]: INFO : Stage: mount
Mar 17 17:40:00.246309 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:40:00.246309 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:40:00.246309 ignition[925]: INFO : mount: mount passed
Mar 17 17:40:00.246309 ignition[925]: INFO : Ignition finished successfully
Mar 17 17:40:00.247623 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:40:00.259670 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:40:00.420100 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:40:00.439892 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:40:00.447576 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940)
Mar 17 17:40:00.447614 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:40:00.448573 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:40:00.450083 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:40:00.452577 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:40:00.454393 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:40:00.486503 ignition[957]: INFO : Ignition 2.20.0
Mar 17 17:40:00.486503 ignition[957]: INFO : Stage: files
Mar 17 17:40:00.488359 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:40:00.488359 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:40:00.488359 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:40:00.491944 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:40:00.491944 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:40:00.494714 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:40:00.496150 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:40:00.496150 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:40:00.495234 unknown[957]: wrote ssh authorized keys file for user: core
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:40:00.500258 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 17:40:00.777775 systemd-networkd[782]: eth0: Gained IPv6LL
Mar 17 17:40:00.912992 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Mar 17 17:40:01.734934 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:40:01.734934 ignition[957]: INFO : files: op(8): [started] processing unit "containerd.service"
Mar 17 17:40:01.739924 ignition[957]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 17:40:01.739924 ignition[957]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 17:40:01.739924 ignition[957]: INFO : files: op(8): [finished] processing unit "containerd.service"
Mar 17 17:40:01.739924 ignition[957]: INFO : files: op(a): [started] processing unit "coreos-metadata.service"
Mar 17 17:40:01.739924 ignition[957]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:40:01.739924 ignition[957]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:40:01.739924 ignition[957]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service"
Mar 17 17:40:01.739924 ignition[957]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:40:01.786166 ignition[957]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:40:01.792591 ignition[957]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:40:01.794251 ignition[957]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:40:01.794251 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:40:01.794251 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:40:01.794251 ignition[957]: INFO : files: files passed
Mar 17 17:40:01.794251 ignition[957]: INFO : Ignition finished successfully
Mar 17 17:40:01.795629 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:40:01.803724 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:40:01.805702 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:40:01.809441 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:40:01.809563 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:40:01.818135 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:40:01.821105 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:40:01.821105 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:40:01.824642 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:40:01.827766 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:40:01.829478 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:40:01.842016 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:40:01.874298 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:40:01.875455 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:40:01.878235 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:40:01.880340 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:40:01.882683 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:40:01.885189 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:40:01.907851 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
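Editor's note on the Ignition "files" stage above: it writes SSH keys for the "core" user, /etc/flatcar/update.conf, a sysext symlink, and systemd drop-ins, all driven by a declarative config fetched via QEMU fw_cfg. The exact config this VM consumed is not recoverable from the log; as a hedged illustration only, a spec-v3 config producing one such file write (the "3.4.0" version string, mode, and `data:` contents here are assumptions, not taken from the log) could be built like this:

```python
import json

# Hypothetical minimal Ignition spec-v3 config. Ignition 2.20.0 (the engine
# version in the log) consumes v3.x configs; the file below is analogous to
# the "/etc/flatcar/update.conf" write the log records, but its contents
# are invented for illustration.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/etc/flatcar/update.conf",
                "mode": 0o644,  # serialized as decimal 420 in JSON
                "contents": {"source": "data:,REBOOT_STRATEGY=off%0A"},
            }
        ]
    },
}
print(json.dumps(config, indent=2))
```

In practice such a config is usually written in Butane YAML and transpiled to this JSON before being passed to the VM.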
Mar 17 17:40:01.919746 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:40:01.933013 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:40:01.933230 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:40:01.936759 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:40:01.938880 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:40:01.939045 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:40:01.943039 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:40:01.943211 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:40:01.945171 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:40:01.945556 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:40:01.946121 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:40:01.946505 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:40:01.947080 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:40:01.947466 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:40:01.947866 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:40:01.948237 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:40:01.948629 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:40:01.948783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:40:01.965185 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:40:01.965374 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:40:01.965826 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:40:01.969382 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:40:01.969851 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:40:01.970000 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:40:01.975682 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:40:01.975800 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:40:01.976890 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:40:01.977131 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:40:01.984598 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:40:01.987378 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:40:01.989230 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:40:01.991140 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:40:01.992021 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:40:01.994060 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:40:01.995052 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:40:01.997188 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:40:01.998388 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:40:02.000962 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:40:02.001950 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:40:02.017731 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:40:02.019663 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:40:02.019805 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:40:02.024631 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:40:02.026843 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:40:02.027606 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:40:02.031038 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:40:02.033895 ignition[1011]: INFO : Ignition 2.20.0
Mar 17 17:40:02.033895 ignition[1011]: INFO : Stage: umount
Mar 17 17:40:02.033895 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:40:02.033895 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:40:02.033895 ignition[1011]: INFO : umount: umount passed
Mar 17 17:40:02.033895 ignition[1011]: INFO : Ignition finished successfully
Mar 17 17:40:02.031143 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:40:02.034739 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:40:02.034853 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:40:02.037961 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:40:02.038071 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:40:02.040408 systemd[1]: Stopped target network.target - Network.
Mar 17 17:40:02.042185 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:40:02.042377 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:40:02.044307 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:40:02.044366 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:40:02.046427 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:40:02.046487 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:40:02.048792 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:40:02.048850 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:40:02.050268 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:40:02.052528 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:40:02.055484 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:40:02.058632 systemd-networkd[782]: eth0: DHCPv6 lease lost
Mar 17 17:40:02.061278 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:40:02.061456 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:40:02.063829 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:40:02.063970 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:40:02.067722 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:40:02.067790 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:40:02.079727 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:40:02.081167 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:40:02.081253 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:40:02.083757 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:40:02.083822 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:40:02.086149 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:40:02.086207 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:40:02.089091 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:40:02.089158 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:40:02.091871 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:40:02.103969 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:40:02.104161 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:40:02.115673 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:40:02.115909 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:40:02.118187 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:40:02.118239 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:40:02.120271 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:40:02.120318 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:40:02.122369 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:40:02.122422 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:40:02.124735 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:40:02.124785 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:40:02.126566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:40:02.126624 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:40:02.144789 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:40:02.145923 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:40:02.145990 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:40:02.148278 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:40:02.148337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:40:02.153247 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:40:02.153371 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:40:02.259986 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:40:02.260167 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:40:02.262839 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:40:02.264598 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:40:02.264807 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:40:02.275695 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:40:02.284241 systemd[1]: Switching root.
Mar 17 17:40:02.317193 systemd-journald[194]: Journal stopped
Mar 17 17:40:03.668879 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:40:03.668942 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:40:03.668960 kernel: SELinux: policy capability open_perms=1
Mar 17 17:40:03.668973 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:40:03.668984 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:40:03.669001 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:40:03.669017 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:40:03.669028 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:40:03.669044 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:40:03.669058 kernel: audit: type=1403 audit(1742233202.903:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:40:03.669074 systemd[1]: Successfully loaded SELinux policy in 43.771ms.
Mar 17 17:40:03.669099 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.873ms.
Mar 17 17:40:03.669115 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:40:03.669127 systemd[1]: Detected virtualization kvm.
Mar 17 17:40:03.669139 systemd[1]: Detected architecture x86-64.
Mar 17 17:40:03.669151 systemd[1]: Detected first boot.
Mar 17 17:40:03.669163 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:40:03.669176 zram_generator::config[1073]: No configuration found.
Mar 17 17:40:03.669189 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:40:03.669201 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:40:03.669216 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 17:40:03.669229 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:40:03.669245 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:40:03.669258 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:40:03.669270 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:40:03.669282 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:40:03.669295 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:40:03.669308 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:40:03.669322 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:40:03.669334 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:40:03.669348 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:40:03.669361 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:40:03.669373 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:40:03.669386 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:40:03.669398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:40:03.669410 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:40:03.669422 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:40:03.669436 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:40:03.669448 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:40:03.669460 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:40:03.669473 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:40:03.669485 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:40:03.669497 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:40:03.669509 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:40:03.669521 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:40:03.669536 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:40:03.669560 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:40:03.669573 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:40:03.669585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:40:03.669597 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:40:03.669618 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:40:03.669632 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:40:03.669644 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:40:03.669657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:40:03.669673 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:40:03.669685 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:40:03.669697 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:40:03.669709 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:40:03.669721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:40:03.669733 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:40:03.669745 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:40:03.669758 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:40:03.669770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:40:03.669785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:40:03.669797 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:40:03.669809 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:40:03.669821 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:40:03.669836 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 17 17:40:03.669849 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 17 17:40:03.669862 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:40:03.669874 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:40:03.669888 kernel: loop: module loaded
Mar 17 17:40:03.669901 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:40:03.669913 kernel: fuse: init (API version 7.39)
Mar 17 17:40:03.669941 systemd-journald[1151]: Collecting audit messages is disabled.
Mar 17 17:40:03.669965 systemd-journald[1151]: Journal started
Mar 17 17:40:03.669986 systemd-journald[1151]: Runtime Journal (/run/log/journal/4e9f6b62f154480c97e41f4b334a4fad) is 6.0M, max 48.4M, 42.3M free.
Mar 17 17:40:03.672598 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:40:03.676753 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:40:03.679686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:40:03.682557 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:40:03.684984 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:40:03.686262 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:40:03.687507 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:40:03.688650 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:40:03.689990 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:40:03.691237 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:40:03.692684 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:40:03.694338 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:40:03.694589 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:40:03.696236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:40:03.696445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:40:03.698179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:40:03.698400 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:40:03.699974 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:40:03.700184 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:40:03.701642 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:40:03.701855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:40:03.703376 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:40:03.704932 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:40:03.706725 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:40:03.719941 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:40:03.722623 kernel: ACPI: bus type drm_connector registered
Mar 17 17:40:03.728792 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:40:03.734676 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:40:03.735961 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:40:03.738976 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:40:03.744172 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:40:03.745562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:40:03.748448 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:40:03.749718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:40:03.757867 systemd-journald[1151]: Time spent on flushing to /var/log/journal/4e9f6b62f154480c97e41f4b334a4fad is 25.556ms for 921 entries.
Mar 17 17:40:03.757867 systemd-journald[1151]: System Journal (/var/log/journal/4e9f6b62f154480c97e41f4b334a4fad) is 8.0M, max 195.6M, 187.6M free.
Mar 17 17:40:03.801260 systemd-journald[1151]: Received client request to flush runtime journal.
Mar 17 17:40:03.758861 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:40:03.764779 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:40:03.773641 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:40:03.773895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:40:03.775796 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:40:03.777928 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:40:03.795018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:40:03.803195 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:40:03.917567 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:40:03.919618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:40:03.933981 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 17:40:03.939934 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Mar 17 17:40:03.939955 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Mar 17 17:40:03.944983 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:40:03.946913 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:40:03.950372 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:40:03.954530 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:40:03.966693 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:40:03.992976 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:40:04.001727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:40:04.022574 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Mar 17 17:40:04.022603 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Mar 17 17:40:04.028706 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:40:04.577121 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:40:04.591023 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:40:04.616010 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
Mar 17 17:40:04.633346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:40:04.653765 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:40:04.673725 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:40:04.683068 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 17 17:40:04.712715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1256)
Mar 17 17:40:04.749607 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 17 17:40:04.754602 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:40:04.794956 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:40:04.804580 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 17 17:40:04.829716 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:40:04.913409 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 17:40:04.913693 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 17:40:04.913861 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 17:40:04.917659 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:40:04.940154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:40:04.986082 systemd-networkd[1254]: lo: Link UP
Mar 17 17:40:04.986092 systemd-networkd[1254]: lo: Gained carrier
Mar 17 17:40:04.987702 systemd-networkd[1254]: Enumeration completed
Mar 17 17:40:04.987850 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:40:04.989086 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:40:04.989094 systemd-networkd[1254]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:40:04.989994 systemd-networkd[1254]: eth0: Link UP
Mar 17 17:40:04.990045 systemd-networkd[1254]: eth0: Gained carrier
Mar 17 17:40:04.990106 systemd-networkd[1254]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:40:05.011344 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:40:05.019613 systemd-networkd[1254]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:40:05.024287 kernel: kvm_amd: TSC scaling supported
Mar 17 17:40:05.024330 kernel: kvm_amd: Nested Virtualization enabled
Mar 17 17:40:05.024372 kernel: kvm_amd: Nested Paging enabled
Mar 17 17:40:05.025061 kernel: kvm_amd: LBR virtualization supported
Mar 17 17:40:05.025897 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 17 17:40:05.026599 kernel: kvm_amd: Virtual GIF supported
Mar 17 17:40:05.048582 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:40:05.085079 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:40:05.105940 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:40:05.107959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:40:05.115990 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:40:05.188224 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:40:05.189804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:40:05.200969 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:40:05.207325 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:40:05.245344 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:40:05.247179 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:40:05.248773 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:40:05.248814 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:40:05.250091 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:40:05.252582 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:40:05.263787 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:40:05.267232 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:40:05.268651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:40:05.270128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:40:05.273716 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:40:05.279071 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:40:05.281476 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:40:05.302620 kernel: loop0: detected capacity change from 0 to 140992
Mar 17 17:40:05.305809 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:40:05.314410 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:40:05.315618 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:40:05.331588 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:40:05.360639 kernel: loop1: detected capacity change from 0 to 210664
Mar 17 17:40:05.400596 kernel: loop2: detected capacity change from 0 to 138184
Mar 17 17:40:05.439599 kernel: loop3: detected capacity change from 0 to 140992
Mar 17 17:40:05.457619 kernel: loop4: detected capacity change from 0 to 210664
Mar 17 17:40:05.471639 kernel: loop5: detected capacity change from 0 to 138184
Mar 17 17:40:05.482631 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 17 17:40:05.611126 (sd-merge)[1308]: Merged extensions into '/usr'.
Mar 17 17:40:05.618069 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:40:05.618819 systemd[1]: Reloading...
Mar 17 17:40:05.701847 zram_generator::config[1336]: No configuration found.
Mar 17 17:40:05.899510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:40:06.004705 systemd[1]: Reloading finished in 385 ms.
Mar 17 17:40:06.169825 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:40:06.187176 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:40:06.193385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:40:06.212285 systemd[1]: Reloading requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:40:06.212334 systemd[1]: Reloading...
Mar 17 17:40:06.238668 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:40:06.254804 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:40:06.255291 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:40:06.256588 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:40:06.256993 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Mar 17 17:40:06.257082 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Mar 17 17:40:06.264834 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:40:06.265043 systemd-tmpfiles[1379]: Skipping /boot
Mar 17 17:40:06.348912 systemd-networkd[1254]: eth0: Gained IPv6LL
Mar 17 17:40:06.355209 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:40:06.355221 systemd-tmpfiles[1379]: Skipping /boot
Mar 17 17:40:06.366817 zram_generator::config[1410]: No configuration found.
Mar 17 17:40:06.576111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:40:06.682124 systemd[1]: Reloading finished in 469 ms.
Mar 17 17:40:06.721323 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:40:06.726952 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:40:06.746724 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:40:06.777843 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:40:06.791881 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:40:06.801722 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:40:06.813820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:40:06.826859 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:40:06.833553 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:40:06.833996 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:40:06.839044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:40:06.851444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:40:06.864365 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:40:06.869237 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:40:06.869443 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:40:06.876783 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:40:06.880471 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:40:06.880807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:40:06.884055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:40:06.884379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:40:06.888340 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:40:06.888763 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:40:06.894828 augenrules[1490]: No rules
Mar 17 17:40:06.896272 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:40:06.896679 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:40:06.905105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:40:06.905793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:40:06.913152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:40:06.917568 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:40:06.924422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:40:06.927059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:40:06.933398 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:40:06.934853 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:40:06.937142 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:40:06.939674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:40:06.940084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:40:06.949192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:40:06.949604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:40:06.952424 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:40:06.952806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:40:06.955762 systemd-resolved[1461]: Positive Trust Anchors:
Mar 17 17:40:06.955787 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:40:06.955819 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:40:06.963490 systemd-resolved[1461]: Defaulting to hostname 'linux'.
Mar 17 17:40:06.966759 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:40:06.968990 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:40:06.971980 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:40:06.978960 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:40:06.981665 systemd[1]: Reached target network.target - Network.
Mar 17 17:40:06.982815 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:40:06.984048 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:40:06.985449 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:40:07.093461 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:40:07.095334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:40:07.101111 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:40:07.105029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:40:07.111848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:40:07.117611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:40:07.119254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:40:07.125018 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:40:07.126873 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:40:07.126934 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:40:07.128148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:40:07.128478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:40:07.130594 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:40:07.131219 augenrules[1520]: /sbin/augenrules: No change
Mar 17 17:40:07.130905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:40:07.132801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:40:07.133092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:40:07.135763 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:40:07.138011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:40:07.141597 augenrules[1547]: No rules
Mar 17 17:40:07.143695 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:40:07.144170 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:40:07.148796 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:40:07.148926 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:40:07.242959 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:40:08.552426 systemd-resolved[1461]: Clock change detected. Flushing caches.
Mar 17 17:40:08.552462 systemd-timesyncd[1536]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 17 17:40:08.552523 systemd-timesyncd[1536]: Initial clock synchronization to Mon 2025-03-17 17:40:08.552323 UTC.
Mar 17 17:40:08.553291 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:40:08.554851 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:40:08.556570 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:40:08.558277 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:40:08.560974 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:40:08.561034 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:40:08.562336 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:40:08.564131 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:40:08.565873 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:40:08.567346 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:40:08.569835 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:40:08.574541 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:40:08.578197 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:40:08.587921 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:40:08.589419 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:40:08.590585 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:40:08.591929 systemd[1]: System is tainted: cgroupsv1
Mar 17 17:40:08.591998 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:40:08.592033 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:40:08.594537 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:40:08.598936 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 17 17:40:08.603356 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:40:08.608894 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:40:08.614381 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:40:08.616807 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:40:08.622079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:40:08.628982 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:40:08.629481 jq[1565]: false
Mar 17 17:40:08.642244 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:40:08.644916 extend-filesystems[1567]: Found loop3
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found loop4
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found loop5
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found sr0
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found vda
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found vda1
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found vda2
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found vda3
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found usr
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found vda4
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found vda6
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found vda7
Mar 17 17:40:08.647912 extend-filesystems[1567]: Found vda9
Mar 17 17:40:08.647912 extend-filesystems[1567]: Checking size of /dev/vda9
Mar 17 17:40:08.650051 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:40:08.661956 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:40:08.674654 dbus-daemon[1563]: [system] SELinux support is enabled
Mar 17 17:40:08.675008 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:40:08.677214 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:40:08.688997 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:40:08.695889 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:40:08.706013 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:40:08.714198 jq[1595]: true
Mar 17 17:40:08.724111 extend-filesystems[1567]: Resized partition /dev/vda9
Mar 17 17:40:08.727905 extend-filesystems[1600]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:40:08.726338 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:40:08.726855 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:40:08.739551 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:40:08.740094 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:40:08.742464 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:40:08.744364 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:40:08.745160 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:40:08.778842 update_engine[1589]: I20250317 17:40:08.776254 1589 main.cc:92] Flatcar Update Engine starting
Mar 17 17:40:08.783235 (ntainerd)[1606]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:40:08.783714 update_engine[1589]: I20250317 17:40:08.783020 1589 update_check_scheduler.cc:74] Next update check in 3m18s
Mar 17 17:40:08.794801 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 17:40:08.794899 jq[1605]: true
Mar 17 17:40:08.798380 systemd-logind[1587]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 17:40:08.803130 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1242)
Mar 17 17:40:08.798410 systemd-logind[1587]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:40:08.804010 systemd-logind[1587]: New seat seat0.
Mar 17 17:40:08.812541 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:40:08.819744 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 17 17:40:08.831105 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 17 17:40:08.842717 dbus-daemon[1563]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 17 17:40:08.857321 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:40:08.893422 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:40:08.951110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:40:08.951419 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:40:08.953572 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:40:08.953786 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:40:08.958600 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:40:08.969040 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:40:09.008797 locksmithd[1639]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:40:09.038740 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:40:09.056784 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 17:40:09.223379 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:40:09.234600 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:40:09.236682 extend-filesystems[1600]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 17:40:09.236682 extend-filesystems[1600]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 17:40:09.236682 extend-filesystems[1600]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 17:40:09.249844 extend-filesystems[1567]: Resized filesystem in /dev/vda9
Mar 17 17:40:09.241314 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:40:09.242073 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:40:09.256780 bash[1638]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:40:09.306627 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:40:09.309231 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:40:09.309673 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:40:09.314073 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 17 17:40:09.322132 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:40:09.350874 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:40:09.403370 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:40:09.407285 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:40:09.410040 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:40:09.833301 containerd[1606]: time="2025-03-17T17:40:09.833156555Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:40:09.892208 containerd[1606]: time="2025-03-17T17:40:09.892096798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:40:09.894931 containerd[1606]: time="2025-03-17T17:40:09.894700480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:40:09.894931 containerd[1606]: time="2025-03-17T17:40:09.894786291Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:40:09.894931 containerd[1606]: time="2025-03-17T17:40:09.894814684Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:40:09.895142 containerd[1606]: time="2025-03-17T17:40:09.895122902Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:40:09.895217 containerd[1606]: time="2025-03-17T17:40:09.895148680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:40:09.895315 containerd[1606]: time="2025-03-17T17:40:09.895262644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:40:09.895315 containerd[1606]: time="2025-03-17T17:40:09.895290145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.895679866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.895714752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.895760387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.895774924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.895920628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.896365311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.896588400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.896608277Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.896796590Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:40:09.897189 containerd[1606]: time="2025-03-17T17:40:09.896903551Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:40:09.952159 containerd[1606]: time="2025-03-17T17:40:09.952066762Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:40:09.952383 containerd[1606]: time="2025-03-17T17:40:09.952196565Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:40:09.952383 containerd[1606]: time="2025-03-17T17:40:09.952224037Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:40:09.952383 containerd[1606]: time="2025-03-17T17:40:09.952249715Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:40:09.952383 containerd[1606]: time="2025-03-17T17:40:09.952270013Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:40:09.952916 containerd[1606]: time="2025-03-17T17:40:09.952534318Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:40:09.953241 containerd[1606]: time="2025-03-17T17:40:09.953181832Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:40:09.953421 containerd[1606]: time="2025-03-17T17:40:09.953383300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:40:09.953465 containerd[1606]: time="2025-03-17T17:40:09.953418676Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:40:09.953465 containerd[1606]: time="2025-03-17T17:40:09.953441680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:40:09.953518 containerd[1606]: time="2025-03-17T17:40:09.953462569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:40:09.953518 containerd[1606]: time="2025-03-17T17:40:09.953481825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:40:09.953518 containerd[1606]: time="2025-03-17T17:40:09.953500169Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:40:09.953626 containerd[1606]: time="2025-03-17T17:40:09.953518624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:40:09.953626 containerd[1606]: time="2025-03-17T17:40:09.953537479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:40:09.953626 containerd[1606]: time="2025-03-17T17:40:09.953556194Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:40:09.953626 containerd[1606]: time="2025-03-17T17:40:09.953575851Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:40:09.953626 containerd[1606]: time="2025-03-17T17:40:09.953593514Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:40:09.953826 containerd[1606]: time="2025-03-17T17:40:09.953624562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.953826 containerd[1606]: time="2025-03-17T17:40:09.953666201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.953826 containerd[1606]: time="2025-03-17T17:40:09.953691578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.953826 containerd[1606]: time="2025-03-17T17:40:09.953714651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.953826 containerd[1606]: time="2025-03-17T17:40:09.953764805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.953826 containerd[1606]: time="2025-03-17T17:40:09.953790383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.953826 containerd[1606]: time="2025-03-17T17:40:09.953812144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954034 containerd[1606]: time="2025-03-17T17:40:09.953833785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954034 containerd[1606]: time="2025-03-17T17:40:09.953853752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954034 containerd[1606]: time="2025-03-17T17:40:09.953877717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954034 containerd[1606]: time="2025-03-17T17:40:09.953897805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954034 containerd[1606]: time="2025-03-17T17:40:09.953916941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954034 containerd[1606]: time="2025-03-17T17:40:09.953936628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954034 containerd[1606]: time="2025-03-17T17:40:09.953978917Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:40:09.954034 containerd[1606]: time="2025-03-17T17:40:09.954012219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954034 containerd[1606]: time="2025-03-17T17:40:09.954032658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954296 containerd[1606]: time="2025-03-17T17:40:09.954050371Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:40:09.954296 containerd[1606]: time="2025-03-17T17:40:09.954134208Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:40:09.954296 containerd[1606]: time="2025-03-17T17:40:09.954171408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:40:09.954296 containerd[1606]: time="2025-03-17T17:40:09.954189973Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:40:09.954296 containerd[1606]: time="2025-03-17T17:40:09.954206303Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:40:09.954296 containerd[1606]: time="2025-03-17T17:40:09.954219779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.954296 containerd[1606]: time="2025-03-17T17:40:09.954243723Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:40:09.954296 containerd[1606]: time="2025-03-17T17:40:09.954259824Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:40:09.954296 containerd[1606]: time="2025-03-17T17:40:09.954278819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:40:09.956103 containerd[1606]: time="2025-03-17T17:40:09.955969389Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:40:09.956522 containerd[1606]: time="2025-03-17T17:40:09.956476780Z" level=info msg="Connect containerd service"
Mar 17 17:40:09.956650 containerd[1606]: time="2025-03-17T17:40:09.956601053Z" level=info msg="using legacy CRI server"
Mar 17 17:40:09.956650 containerd[1606]: time="2025-03-17T17:40:09.956617374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:40:09.956950 containerd[1606]: time="2025-03-17T17:40:09.956865760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:40:09.957741 containerd[1606]: time="2025-03-17T17:40:09.957673093Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:40:09.962112 containerd[1606]: time="2025-03-17T17:40:09.958004685Z" level=info msg="Start subscribing containerd event"
Mar 17 17:40:09.962112 containerd[1606]: time="2025-03-17T17:40:09.958117016Z" level=info msg="Start recovering state"
Mar 17 17:40:09.962112 containerd[1606]: time="2025-03-17T17:40:09.958187147Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:40:09.962112 containerd[1606]: time="2025-03-17T17:40:09.958269021Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:40:09.962112 containerd[1606]: time="2025-03-17T17:40:09.958275994Z" level=info msg="Start event monitor"
Mar 17 17:40:09.962112 containerd[1606]: time="2025-03-17T17:40:09.958344733Z" level=info msg="Start snapshots syncer"
Mar 17 17:40:09.962112 containerd[1606]: time="2025-03-17T17:40:09.958359000Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:40:09.962112 containerd[1606]: time="2025-03-17T17:40:09.958369830Z" level=info msg="Start streaming server"
Mar 17 17:40:09.962112 containerd[1606]: time="2025-03-17T17:40:09.958507398Z" level=info msg="containerd successfully booted in 0.144235s"
Mar 17 17:40:09.958964 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:40:11.016552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:40:11.018682 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:40:11.021796 systemd[1]: Startup finished in 7.137s (kernel) + 6.851s (userspace) = 13.989s.
Mar 17 17:40:11.040533 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:40:13.321247 kubelet[1687]: E0317 17:40:13.321151 1687 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:40:13.340220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:40:13.340600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:40:16.956097 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:40:16.971197 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:43498.service - OpenSSH per-connection server daemon (10.0.0.1:43498). Mar 17 17:40:17.029996 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 43498 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:17.034404 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:17.055024 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:40:17.079273 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:40:17.083824 systemd-logind[1587]: New session 1 of user core. Mar 17 17:40:17.110271 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:40:17.123357 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:40:17.128691 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:40:17.293682 systemd[1708]: Queued start job for default target default.target. 
Mar 17 17:40:17.294273 systemd[1708]: Created slice app.slice - User Application Slice. Mar 17 17:40:17.294309 systemd[1708]: Reached target paths.target - Paths. Mar 17 17:40:17.294327 systemd[1708]: Reached target timers.target - Timers. Mar 17 17:40:17.312968 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:40:17.323926 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:40:17.324037 systemd[1708]: Reached target sockets.target - Sockets. Mar 17 17:40:17.324057 systemd[1708]: Reached target basic.target - Basic System. Mar 17 17:40:17.324127 systemd[1708]: Reached target default.target - Main User Target. Mar 17 17:40:17.324178 systemd[1708]: Startup finished in 182ms. Mar 17 17:40:17.324877 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:40:17.337466 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:40:17.425163 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:43510.service - OpenSSH per-connection server daemon (10.0.0.1:43510). Mar 17 17:40:17.489989 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 43510 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:17.491060 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:17.502287 systemd-logind[1587]: New session 2 of user core. Mar 17 17:40:17.515353 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:40:17.584767 sshd[1723]: Connection closed by 10.0.0.1 port 43510 Mar 17 17:40:17.585168 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:17.599331 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:43514.service - OpenSSH per-connection server daemon (10.0.0.1:43514). Mar 17 17:40:17.600277 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:43510.service: Deactivated successfully. Mar 17 17:40:17.603874 systemd[1]: session-2.scope: Deactivated successfully. 
Mar 17 17:40:17.604112 systemd-logind[1587]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:40:17.606046 systemd-logind[1587]: Removed session 2. Mar 17 17:40:17.645271 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 43514 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:17.647273 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:17.653506 systemd-logind[1587]: New session 3 of user core. Mar 17 17:40:17.662310 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:40:17.714199 sshd[1731]: Connection closed by 10.0.0.1 port 43514 Mar 17 17:40:17.714543 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:17.723159 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:43530.service - OpenSSH per-connection server daemon (10.0.0.1:43530). Mar 17 17:40:17.723777 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:43514.service: Deactivated successfully. Mar 17 17:40:17.726687 systemd-logind[1587]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:40:17.727482 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:40:17.728786 systemd-logind[1587]: Removed session 3. Mar 17 17:40:17.766903 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 43530 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:17.769148 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:17.776443 systemd-logind[1587]: New session 4 of user core. Mar 17 17:40:17.788361 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:40:17.865007 sshd[1739]: Connection closed by 10.0.0.1 port 43530 Mar 17 17:40:17.866141 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:17.872064 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:43542.service - OpenSSH per-connection server daemon (10.0.0.1:43542). 
Mar 17 17:40:17.872749 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:43530.service: Deactivated successfully. Mar 17 17:40:17.876625 systemd-logind[1587]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:40:17.877993 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:40:17.879992 systemd-logind[1587]: Removed session 4. Mar 17 17:40:17.925433 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 43542 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:17.928340 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:17.952874 systemd-logind[1587]: New session 5 of user core. Mar 17 17:40:17.961438 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:40:18.028565 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:40:18.029016 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:40:18.047825 sudo[1748]: pam_unix(sudo:session): session closed for user root Mar 17 17:40:18.050740 sshd[1747]: Connection closed by 10.0.0.1 port 43542 Mar 17 17:40:18.050956 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:18.063274 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:43556.service - OpenSSH per-connection server daemon (10.0.0.1:43556). Mar 17 17:40:18.064035 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:43542.service: Deactivated successfully. Mar 17 17:40:18.068788 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:40:18.069045 systemd-logind[1587]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:40:18.079579 systemd-logind[1587]: Removed session 5. 
Mar 17 17:40:18.119771 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 43556 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:18.122593 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:18.133275 systemd-logind[1587]: New session 6 of user core. Mar 17 17:40:18.147434 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:40:18.211162 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:40:18.211645 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:40:18.226846 sudo[1758]: pam_unix(sudo:session): session closed for user root Mar 17 17:40:18.234219 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:40:18.234593 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:40:18.264586 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:40:18.309027 augenrules[1780]: No rules Mar 17 17:40:18.311563 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:40:18.312061 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:40:18.313871 sudo[1757]: pam_unix(sudo:session): session closed for user root Mar 17 17:40:18.316038 sshd[1756]: Connection closed by 10.0.0.1 port 43556 Mar 17 17:40:18.316526 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:18.326225 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:43562.service - OpenSSH per-connection server daemon (10.0.0.1:43562). Mar 17 17:40:18.327192 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:43556.service: Deactivated successfully. Mar 17 17:40:18.330228 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:40:18.331446 systemd-logind[1587]: Session 6 logged out. 
Waiting for processes to exit. Mar 17 17:40:18.333520 systemd-logind[1587]: Removed session 6. Mar 17 17:40:18.373855 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 43562 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:40:18.375620 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:18.381062 systemd-logind[1587]: New session 7 of user core. Mar 17 17:40:18.391203 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:40:18.451404 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:40:18.451902 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:40:18.480269 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:40:18.511042 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:40:18.511547 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:40:20.327644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:20.351573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:20.390787 systemd[1]: Reloading requested from client PID 1846 ('systemctl') (unit session-7.scope)... Mar 17 17:40:20.390813 systemd[1]: Reloading... Mar 17 17:40:20.545802 zram_generator::config[1887]: No configuration found. Mar 17 17:40:20.872702 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:40:20.968355 systemd[1]: Reloading finished in 576 ms. Mar 17 17:40:21.035931 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:40:21.036086 systemd[1]: kubelet.service: Failed with result 'signal'. 
Mar 17 17:40:21.036896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:21.058430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:21.265705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:21.275402 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:40:21.551742 kubelet[1943]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:40:21.551742 kubelet[1943]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:40:21.551742 kubelet[1943]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:40:21.551742 kubelet[1943]: I0317 17:40:21.549854 1943 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:40:22.137404 kubelet[1943]: I0317 17:40:22.137322 1943 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:40:22.137404 kubelet[1943]: I0317 17:40:22.137368 1943 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:40:22.140324 kubelet[1943]: I0317 17:40:22.137685 1943 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:40:22.557639 kubelet[1943]: I0317 17:40:22.557286 1943 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:40:22.593375 kubelet[1943]: I0317 17:40:22.593139 1943 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:40:22.594072 kubelet[1943]: I0317 17:40:22.593887 1943 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:40:22.594534 kubelet[1943]: I0317 17:40:22.593953 1943 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.0.0.38","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:40:22.678375 kubelet[1943]: I0317 17:40:22.678305 1943 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:40:22.679359 kubelet[1943]: I0317 17:40:22.679196 1943 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:40:22.679597 kubelet[1943]: I0317 17:40:22.679558 1943 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:40:22.685625 kubelet[1943]: I0317 17:40:22.684188 1943 kubelet.go:400] "Attempting to sync node with 
API server" Mar 17 17:40:22.685625 kubelet[1943]: I0317 17:40:22.684228 1943 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:40:22.685625 kubelet[1943]: I0317 17:40:22.684264 1943 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:40:22.685625 kubelet[1943]: I0317 17:40:22.684307 1943 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:40:22.685625 kubelet[1943]: E0317 17:40:22.685302 1943 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:22.685625 kubelet[1943]: E0317 17:40:22.685368 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:22.694915 kubelet[1943]: I0317 17:40:22.694862 1943 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:40:22.697360 kubelet[1943]: I0317 17:40:22.697329 1943 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:40:22.697476 kubelet[1943]: W0317 17:40:22.697425 1943 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 17 17:40:22.698423 kubelet[1943]: I0317 17:40:22.698391 1943 server.go:1264] "Started kubelet" Mar 17 17:40:22.701555 kubelet[1943]: I0317 17:40:22.698883 1943 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:40:22.701555 kubelet[1943]: I0317 17:40:22.699451 1943 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:40:22.701555 kubelet[1943]: I0317 17:40:22.699509 1943 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:40:22.701555 kubelet[1943]: I0317 17:40:22.700502 1943 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:40:22.703612 kubelet[1943]: I0317 17:40:22.703523 1943 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:40:22.713417 kubelet[1943]: I0317 17:40:22.713344 1943 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:40:22.714553 kubelet[1943]: I0317 17:40:22.713479 1943 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:40:22.714553 kubelet[1943]: I0317 17:40:22.713576 1943 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:40:22.718694 kubelet[1943]: I0317 17:40:22.715249 1943 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:40:22.718694 kubelet[1943]: I0317 17:40:22.715403 1943 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:40:22.721622 kubelet[1943]: I0317 17:40:22.721362 1943 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:40:22.725283 kubelet[1943]: E0317 17:40:22.722317 1943 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:40:22.744335 kubelet[1943]: E0317 17:40:22.742672 1943 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.38\" not found" node="10.0.0.38" Mar 17 17:40:22.779137 kubelet[1943]: I0317 17:40:22.779094 1943 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:40:22.779137 kubelet[1943]: I0317 17:40:22.779121 1943 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:40:22.779137 kubelet[1943]: I0317 17:40:22.779148 1943 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:40:22.790045 kubelet[1943]: I0317 17:40:22.788396 1943 policy_none.go:49] "None policy: Start" Mar 17 17:40:22.790936 kubelet[1943]: I0317 17:40:22.790399 1943 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:40:22.790936 kubelet[1943]: I0317 17:40:22.790440 1943 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:40:22.825488 kubelet[1943]: I0317 17:40:22.823591 1943 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.38" Mar 17 17:40:22.842808 kubelet[1943]: I0317 17:40:22.842389 1943 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:40:22.844842 kubelet[1943]: I0317 17:40:22.843352 1943 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:40:22.844842 kubelet[1943]: I0317 17:40:22.843518 1943 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:40:22.844842 kubelet[1943]: I0317 17:40:22.843697 1943 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.38" Mar 17 17:40:23.013781 kubelet[1943]: I0317 17:40:23.013415 1943 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 17:40:23.015513 containerd[1606]: 
time="2025-03-17T17:40:23.013932867Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:40:23.023089 kubelet[1943]: I0317 17:40:23.017285 1943 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 17:40:23.071940 kubelet[1943]: I0317 17:40:23.071234 1943 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:40:23.082325 kubelet[1943]: I0317 17:40:23.077821 1943 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:40:23.082325 kubelet[1943]: I0317 17:40:23.077875 1943 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:40:23.082325 kubelet[1943]: I0317 17:40:23.077910 1943 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:40:23.082325 kubelet[1943]: E0317 17:40:23.077981 1943 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 17:40:23.141253 kubelet[1943]: I0317 17:40:23.140898 1943 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 17:40:23.141859 kubelet[1943]: W0317 17:40:23.141668 1943 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:40:23.141859 kubelet[1943]: W0317 17:40:23.141716 1943 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:40:23.141859 kubelet[1943]: W0317 17:40:23.141811 1943 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: 
k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 17:40:23.309416 sudo[1793]: pam_unix(sudo:session): session closed for user root Mar 17 17:40:23.311800 sshd[1792]: Connection closed by 10.0.0.1 port 43562 Mar 17 17:40:23.312451 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:23.316673 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:43562.service: Deactivated successfully. Mar 17 17:40:23.321315 systemd-logind[1587]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:40:23.321540 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:40:23.323635 systemd-logind[1587]: Removed session 7. Mar 17 17:40:23.685639 kubelet[1943]: E0317 17:40:23.685497 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:23.685639 kubelet[1943]: I0317 17:40:23.685586 1943 apiserver.go:52] "Watching apiserver" Mar 17 17:40:23.691030 kubelet[1943]: I0317 17:40:23.690938 1943 topology_manager.go:215] "Topology Admit Handler" podUID="0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534" podNamespace="calico-system" podName="calico-node-hq5rp" Mar 17 17:40:23.691194 kubelet[1943]: I0317 17:40:23.691109 1943 topology_manager.go:215] "Topology Admit Handler" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d" podNamespace="calico-system" podName="csi-node-driver-t27ck" Mar 17 17:40:23.691298 kubelet[1943]: I0317 17:40:23.691196 1943 topology_manager.go:215] "Topology Admit Handler" podUID="ca0dfac3-2124-4688-abea-b2d63e394369" podNamespace="kube-system" podName="kube-proxy-fnhmt" Mar 17 17:40:23.692665 kubelet[1943]: E0317 17:40:23.691890 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:23.715844 kubelet[1943]: I0317 17:40:23.715783 1943 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:40:23.734440 kubelet[1943]: I0317 17:40:23.734263 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4c4e0744-d875-41c4-9067-a3b54356fd5d-varrun\") pod \"csi-node-driver-t27ck\" (UID: \"4c4e0744-d875-41c4-9067-a3b54356fd5d\") " pod="calico-system/csi-node-driver-t27ck"
Mar 17 17:40:23.734440 kubelet[1943]: I0317 17:40:23.734339 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca0dfac3-2124-4688-abea-b2d63e394369-lib-modules\") pod \"kube-proxy-fnhmt\" (UID: \"ca0dfac3-2124-4688-abea-b2d63e394369\") " pod="kube-system/kube-proxy-fnhmt"
Mar 17 17:40:23.734440 kubelet[1943]: I0317 17:40:23.734368 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-policysync\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.734440 kubelet[1943]: I0317 17:40:23.734388 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-cni-net-dir\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.734440 kubelet[1943]: I0317 17:40:23.734417 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4c4e0744-d875-41c4-9067-a3b54356fd5d-socket-dir\") pod \"csi-node-driver-t27ck\" (UID: \"4c4e0744-d875-41c4-9067-a3b54356fd5d\") " pod="calico-system/csi-node-driver-t27ck"
Mar 17 17:40:23.734781 kubelet[1943]: I0317 17:40:23.734438 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca0dfac3-2124-4688-abea-b2d63e394369-kube-proxy\") pod \"kube-proxy-fnhmt\" (UID: \"ca0dfac3-2124-4688-abea-b2d63e394369\") " pod="kube-system/kube-proxy-fnhmt"
Mar 17 17:40:23.734781 kubelet[1943]: I0317 17:40:23.734462 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzsp\" (UniqueName: \"kubernetes.io/projected/ca0dfac3-2124-4688-abea-b2d63e394369-kube-api-access-mwzsp\") pod \"kube-proxy-fnhmt\" (UID: \"ca0dfac3-2124-4688-abea-b2d63e394369\") " pod="kube-system/kube-proxy-fnhmt"
Mar 17 17:40:23.734781 kubelet[1943]: I0317 17:40:23.734485 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-lib-modules\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.734781 kubelet[1943]: I0317 17:40:23.734512 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-node-certs\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.734781 kubelet[1943]: I0317 17:40:23.734534 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-var-lib-calico\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.734928 kubelet[1943]: I0317 17:40:23.734560 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-cni-bin-dir\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.734928 kubelet[1943]: I0317 17:40:23.734578 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-flexvol-driver-host\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.734928 kubelet[1943]: I0317 17:40:23.734606 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdtsx\" (UniqueName: \"kubernetes.io/projected/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-kube-api-access-fdtsx\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.734928 kubelet[1943]: I0317 17:40:23.734624 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c4e0744-d875-41c4-9067-a3b54356fd5d-kubelet-dir\") pod \"csi-node-driver-t27ck\" (UID: \"4c4e0744-d875-41c4-9067-a3b54356fd5d\") " pod="calico-system/csi-node-driver-t27ck"
Mar 17 17:40:23.734928 kubelet[1943]: I0317 17:40:23.734660 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca0dfac3-2124-4688-abea-b2d63e394369-xtables-lock\") pod \"kube-proxy-fnhmt\" (UID: \"ca0dfac3-2124-4688-abea-b2d63e394369\") " pod="kube-system/kube-proxy-fnhmt"
Mar 17 17:40:23.735063 kubelet[1943]: I0317 17:40:23.734677 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-xtables-lock\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.735063 kubelet[1943]: I0317 17:40:23.734708 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-tigera-ca-bundle\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.735063 kubelet[1943]: I0317 17:40:23.734748 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-var-run-calico\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.735063 kubelet[1943]: I0317 17:40:23.734765 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534-cni-log-dir\") pod \"calico-node-hq5rp\" (UID: \"0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534\") " pod="calico-system/calico-node-hq5rp"
Mar 17 17:40:23.735063 kubelet[1943]: I0317 17:40:23.734780 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4c4e0744-d875-41c4-9067-a3b54356fd5d-registration-dir\") pod \"csi-node-driver-t27ck\" (UID: \"4c4e0744-d875-41c4-9067-a3b54356fd5d\") " pod="calico-system/csi-node-driver-t27ck"
Mar 17 17:40:23.735199 kubelet[1943]: I0317 17:40:23.734796 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqplz\" (UniqueName: \"kubernetes.io/projected/4c4e0744-d875-41c4-9067-a3b54356fd5d-kube-api-access-lqplz\") pod \"csi-node-driver-t27ck\" (UID: \"4c4e0744-d875-41c4-9067-a3b54356fd5d\") " pod="calico-system/csi-node-driver-t27ck"
Mar 17 17:40:23.842063 kubelet[1943]: E0317 17:40:23.842026 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.842388 kubelet[1943]: W0317 17:40:23.842253 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.842388 kubelet[1943]: E0317 17:40:23.842337 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.845967 kubelet[1943]: E0317 17:40:23.845797 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.845967 kubelet[1943]: W0317 17:40:23.845825 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.845967 kubelet[1943]: E0317 17:40:23.845855 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.846262 kubelet[1943]: E0317 17:40:23.846234 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.846475 kubelet[1943]: W0317 17:40:23.846335 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.846475 kubelet[1943]: E0317 17:40:23.846358 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.846629 kubelet[1943]: E0317 17:40:23.846598 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.846629 kubelet[1943]: W0317 17:40:23.846612 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.846629 kubelet[1943]: E0317 17:40:23.846622 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.846922 kubelet[1943]: E0317 17:40:23.846904 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.846922 kubelet[1943]: W0317 17:40:23.846919 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.847065 kubelet[1943]: E0317 17:40:23.846936 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.847182 kubelet[1943]: E0317 17:40:23.847169 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.847182 kubelet[1943]: W0317 17:40:23.847181 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.847255 kubelet[1943]: E0317 17:40:23.847204 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.847515 kubelet[1943]: E0317 17:40:23.847498 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.847515 kubelet[1943]: W0317 17:40:23.847513 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.847637 kubelet[1943]: E0317 17:40:23.847528 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.847788 kubelet[1943]: E0317 17:40:23.847764 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.847788 kubelet[1943]: W0317 17:40:23.847776 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.847889 kubelet[1943]: E0317 17:40:23.847864 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.848090 kubelet[1943]: E0317 17:40:23.848074 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.848090 kubelet[1943]: W0317 17:40:23.848088 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.848190 kubelet[1943]: E0317 17:40:23.848106 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.848493 kubelet[1943]: E0317 17:40:23.848367 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.848493 kubelet[1943]: W0317 17:40:23.848391 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.848493 kubelet[1943]: E0317 17:40:23.848408 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.849086 kubelet[1943]: E0317 17:40:23.848863 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.849086 kubelet[1943]: W0317 17:40:23.848876 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.849086 kubelet[1943]: E0317 17:40:23.848913 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.849345 kubelet[1943]: E0317 17:40:23.849314 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.849345 kubelet[1943]: W0317 17:40:23.849328 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.849627 kubelet[1943]: E0317 17:40:23.849608 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.849844 kubelet[1943]: E0317 17:40:23.849824 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.849844 kubelet[1943]: W0317 17:40:23.849839 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.849920 kubelet[1943]: E0317 17:40:23.849851 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.850233 kubelet[1943]: E0317 17:40:23.850172 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.850233 kubelet[1943]: W0317 17:40:23.850185 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.850233 kubelet[1943]: E0317 17:40:23.850198 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.909853 kubelet[1943]: E0317 17:40:23.909810 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.909853 kubelet[1943]: W0317 17:40:23.909839 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.910013 kubelet[1943]: E0317 17:40:23.909873 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.913597 kubelet[1943]: E0317 17:40:23.912933 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.913597 kubelet[1943]: W0317 17:40:23.912954 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.913597 kubelet[1943]: E0317 17:40:23.912981 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.913597 kubelet[1943]: E0317 17:40:23.913294 1943 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:40:23.913597 kubelet[1943]: W0317 17:40:23.913304 1943 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:40:23.913597 kubelet[1943]: E0317 17:40:23.913316 1943 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:40:23.999433 kubelet[1943]: E0317 17:40:23.999127 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:40:24.001446 kubelet[1943]: E0317 17:40:23.999683 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:40:24.001524 containerd[1606]: time="2025-03-17T17:40:24.000162006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hq5rp,Uid:0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534,Namespace:calico-system,Attempt:0,}"
Mar 17 17:40:24.003325 containerd[1606]: time="2025-03-17T17:40:24.001643414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnhmt,Uid:ca0dfac3-2124-4688-abea-b2d63e394369,Namespace:kube-system,Attempt:0,}"
Mar 17 17:40:24.686816 kubelet[1943]: E0317 17:40:24.686651 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:25.083612 kubelet[1943]: E0317 17:40:25.083227 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:25.686855 kubelet[1943]: E0317 17:40:25.686800 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:26.687585 kubelet[1943]: E0317 17:40:26.687501 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:27.078636 kubelet[1943]: E0317 17:40:27.078449 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:27.660093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819393684.mount: Deactivated successfully.
Mar 17 17:40:27.688336 kubelet[1943]: E0317 17:40:27.688173 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:27.782459 containerd[1606]: time="2025-03-17T17:40:27.781259293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:40:27.784567 containerd[1606]: time="2025-03-17T17:40:27.784489139Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 17 17:40:27.791981 containerd[1606]: time="2025-03-17T17:40:27.791861121Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:40:27.816668 containerd[1606]: time="2025-03-17T17:40:27.814853542Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:40:27.828446 containerd[1606]: time="2025-03-17T17:40:27.828249700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:40:27.856453 containerd[1606]: time="2025-03-17T17:40:27.856353254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:40:27.862864 containerd[1606]: time="2025-03-17T17:40:27.862294364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.861954654s"
Mar 17 17:40:27.879337 containerd[1606]: time="2025-03-17T17:40:27.879199100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.877385867s"
Mar 17 17:40:28.386591 containerd[1606]: time="2025-03-17T17:40:28.386430869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:40:28.386591 containerd[1606]: time="2025-03-17T17:40:28.386536227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:40:28.387180 containerd[1606]: time="2025-03-17T17:40:28.386552518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:40:28.387180 containerd[1606]: time="2025-03-17T17:40:28.386706336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:40:28.390752 containerd[1606]: time="2025-03-17T17:40:28.387207516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:40:28.390752 containerd[1606]: time="2025-03-17T17:40:28.387297705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:40:28.390752 containerd[1606]: time="2025-03-17T17:40:28.387320437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:40:28.390752 containerd[1606]: time="2025-03-17T17:40:28.387461151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:40:28.689014 kubelet[1943]: E0317 17:40:28.688792 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:28.821756 containerd[1606]: time="2025-03-17T17:40:28.821402351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hq5rp,Uid:0cf3bc0d-893a-4d68-b5bc-5fe4d19e4534,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b8ece45cc19cd83a7c62bfca68753ac1263d1534021775c22ca2d1f8989be67\""
Mar 17 17:40:28.824112 kubelet[1943]: E0317 17:40:28.822784 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:40:28.828300 containerd[1606]: time="2025-03-17T17:40:28.825501807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\""
Mar 17 17:40:28.842601 containerd[1606]: time="2025-03-17T17:40:28.842550103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fnhmt,Uid:ca0dfac3-2124-4688-abea-b2d63e394369,Namespace:kube-system,Attempt:0,} returns sandbox id \"82bf50ad122bf6c8ec9fae987479cb3154ef86ae495c63b5dfe76d8123ce7504\""
Mar 17 17:40:28.843619 kubelet[1943]: E0317 17:40:28.843588 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:40:29.079492 kubelet[1943]: E0317 17:40:29.079320 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:29.689917 kubelet[1943]: E0317 17:40:29.689690 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:30.690504 kubelet[1943]: E0317 17:40:30.690438 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:30.775239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779024113.mount: Deactivated successfully.
Mar 17 17:40:30.867060 containerd[1606]: time="2025-03-17T17:40:30.866933782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:30.869473 containerd[1606]: time="2025-03-17T17:40:30.869370691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253"
Mar 17 17:40:30.871708 containerd[1606]: time="2025-03-17T17:40:30.871544066Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:30.874402 containerd[1606]: time="2025-03-17T17:40:30.874352752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:30.875409 containerd[1606]: time="2025-03-17T17:40:30.875351054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.049798722s"
Mar 17 17:40:30.875505 containerd[1606]: time="2025-03-17T17:40:30.875414383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\""
Mar 17 17:40:30.877220 containerd[1606]: time="2025-03-17T17:40:30.877184422Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 17:40:30.879773 containerd[1606]: time="2025-03-17T17:40:30.879689439Z" level=info msg="CreateContainer within sandbox \"5b8ece45cc19cd83a7c62bfca68753ac1263d1534021775c22ca2d1f8989be67\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 17 17:40:30.922689 containerd[1606]: time="2025-03-17T17:40:30.922279475Z" level=info msg="CreateContainer within sandbox \"5b8ece45cc19cd83a7c62bfca68753ac1263d1534021775c22ca2d1f8989be67\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2f3938802730312d8ef860f50a2deece42a33327fa0d21116d85d3f313888c27\""
Mar 17 17:40:30.924072 containerd[1606]: time="2025-03-17T17:40:30.923591285Z" level=info msg="StartContainer for \"2f3938802730312d8ef860f50a2deece42a33327fa0d21116d85d3f313888c27\""
Mar 17 17:40:31.079344 kubelet[1943]: E0317 17:40:31.078396 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:31.158465 containerd[1606]: time="2025-03-17T17:40:31.158365346Z" level=info msg="StartContainer for \"2f3938802730312d8ef860f50a2deece42a33327fa0d21116d85d3f313888c27\" returns successfully"
Mar 17 17:40:31.300606 containerd[1606]: time="2025-03-17T17:40:31.300495390Z" level=info msg="shim disconnected" id=2f3938802730312d8ef860f50a2deece42a33327fa0d21116d85d3f313888c27 namespace=k8s.io
Mar 17 17:40:31.300606 containerd[1606]: time="2025-03-17T17:40:31.300569068Z" level=warning msg="cleaning up after shim disconnected" id=2f3938802730312d8ef860f50a2deece42a33327fa0d21116d85d3f313888c27 namespace=k8s.io
Mar 17 17:40:31.300606 containerd[1606]: time="2025-03-17T17:40:31.300580710Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:40:31.691712 kubelet[1943]: E0317 17:40:31.691622 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:31.749196 systemd[1]: run-containerd-runc-k8s.io-2f3938802730312d8ef860f50a2deece42a33327fa0d21116d85d3f313888c27-runc.KMpPGy.mount: Deactivated successfully.
Mar 17 17:40:31.749473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f3938802730312d8ef860f50a2deece42a33327fa0d21116d85d3f313888c27-rootfs.mount: Deactivated successfully.
Mar 17 17:40:32.126331 kubelet[1943]: E0317 17:40:32.125992 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:40:32.343094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566691731.mount: Deactivated successfully.
Mar 17 17:40:32.691908 kubelet[1943]: E0317 17:40:32.691841 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:32.771697 containerd[1606]: time="2025-03-17T17:40:32.771612444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:32.772626 containerd[1606]: time="2025-03-17T17:40:32.772569879Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372"
Mar 17 17:40:32.774105 containerd[1606]: time="2025-03-17T17:40:32.774055054Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:32.776256 containerd[1606]: time="2025-03-17T17:40:32.776224251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:32.777088 containerd[1606]: time="2025-03-17T17:40:32.777026806Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.899796377s"
Mar 17 17:40:32.777152 containerd[1606]: time="2025-03-17T17:40:32.777091627Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 17:40:32.778226 containerd[1606]: time="2025-03-17T17:40:32.778184035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\""
Mar 17 17:40:32.782695 containerd[1606]: time="2025-03-17T17:40:32.782094067Z" level=info msg="CreateContainer within sandbox \"82bf50ad122bf6c8ec9fae987479cb3154ef86ae495c63b5dfe76d8123ce7504\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:40:32.796007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774119535.mount: Deactivated successfully.
Mar 17 17:40:32.797573 containerd[1606]: time="2025-03-17T17:40:32.797524439Z" level=info msg="CreateContainer within sandbox \"82bf50ad122bf6c8ec9fae987479cb3154ef86ae495c63b5dfe76d8123ce7504\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c71d15af706a73c13ac74fbbe4d2ef87001cd1f667f7701f86660371784ff84a\""
Mar 17 17:40:32.798250 containerd[1606]: time="2025-03-17T17:40:32.798189466Z" level=info msg="StartContainer for \"c71d15af706a73c13ac74fbbe4d2ef87001cd1f667f7701f86660371784ff84a\""
Mar 17 17:40:32.973699 containerd[1606]: time="2025-03-17T17:40:32.973585764Z" level=info msg="StartContainer for \"c71d15af706a73c13ac74fbbe4d2ef87001cd1f667f7701f86660371784ff84a\" returns successfully"
Mar 17 17:40:33.078853 kubelet[1943]: E0317 17:40:33.078795 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:33.129499 kubelet[1943]: E0317 17:40:33.129463 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:40:33.261045 kubelet[1943]: I0317 17:40:33.260879 1943 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fnhmt" podStartSLOduration=7.327388814 podStartE2EDuration="11.260861966s" podCreationTimestamp="2025-03-17 17:40:22 +0000 UTC" firstStartedPulling="2025-03-17 17:40:28.844517802 +0000 UTC m=+7.549910461" lastFinishedPulling="2025-03-17 17:40:32.777990963 +0000 UTC m=+11.483383613" observedRunningTime="2025-03-17 17:40:33.259865969 +0000 UTC m=+11.965258618" watchObservedRunningTime="2025-03-17 17:40:33.260861966 +0000 UTC m=+11.966254616"
Mar 17 17:40:33.692300 kubelet[1943]: E0317 17:40:33.692233 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:34.131536 kubelet[1943]: E0317 17:40:34.131491 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:40:34.692507 kubelet[1943]: E0317 17:40:34.692394 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:35.079209 kubelet[1943]: E0317 17:40:35.078991 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:35.693198 kubelet[1943]: E0317 17:40:35.693098 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:36.693627 kubelet[1943]: E0317 17:40:36.693589 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:37.079469 kubelet[1943]: E0317 17:40:37.079274 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:37.693801 kubelet[1943]: E0317 17:40:37.693709 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:38.694560 kubelet[1943]: E0317 17:40:38.694471 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:38.943648 containerd[1606]: time="2025-03-17T17:40:38.943199919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:39.022426 containerd[1606]: time="2025-03-17T17:40:39.022266639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477"
Mar 17 17:40:39.078413 kubelet[1943]: E0317 17:40:39.078331 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:39.181992 containerd[1606]: time="2025-03-17T17:40:39.181910280Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:39.229075 containerd[1606]: time="2025-03-17T17:40:39.228964277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:40:39.229969 containerd[1606]: time="2025-03-17T17:40:39.229932502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag
\"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 6.451710576s" Mar 17 17:40:39.229969 containerd[1606]: time="2025-03-17T17:40:39.229967989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 17:40:39.232894 containerd[1606]: time="2025-03-17T17:40:39.232826959Z" level=info msg="CreateContainer within sandbox \"5b8ece45cc19cd83a7c62bfca68753ac1263d1534021775c22ca2d1f8989be67\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:40:39.694974 kubelet[1943]: E0317 17:40:39.694911 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:39.900418 containerd[1606]: time="2025-03-17T17:40:39.900347149Z" level=info msg="CreateContainer within sandbox \"5b8ece45cc19cd83a7c62bfca68753ac1263d1534021775c22ca2d1f8989be67\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"95bc988aa097378bddaa592fd18d9160fd621bf316e4ac3faea113f607fab5d1\"" Mar 17 17:40:39.901016 containerd[1606]: time="2025-03-17T17:40:39.900989724Z" level=info msg="StartContainer for \"95bc988aa097378bddaa592fd18d9160fd621bf316e4ac3faea113f607fab5d1\"" Mar 17 17:40:40.336123 containerd[1606]: time="2025-03-17T17:40:40.336065902Z" level=info msg="StartContainer for \"95bc988aa097378bddaa592fd18d9160fd621bf316e4ac3faea113f607fab5d1\" returns successfully" Mar 17 17:40:40.695775 kubelet[1943]: E0317 17:40:40.695696 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:41.079167 kubelet[1943]: E0317 17:40:41.078966 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d" Mar 17 17:40:41.340190 kubelet[1943]: E0317 17:40:41.340067 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:41.696691 kubelet[1943]: E0317 17:40:41.696597 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:42.341518 kubelet[1943]: E0317 17:40:42.341475 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:42.684415 kubelet[1943]: E0317 17:40:42.684368 1943 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:42.696893 kubelet[1943]: E0317 17:40:42.696856 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:43.078676 kubelet[1943]: E0317 17:40:43.078499 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d" Mar 17 17:40:43.697679 kubelet[1943]: E0317 17:40:43.697583 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:44.697971 kubelet[1943]: E0317 17:40:44.697823 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:44.906776 containerd[1606]: 
time="2025-03-17T17:40:44.906604964Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:40:44.908708 kubelet[1943]: I0317 17:40:44.908682 1943 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:40:44.933495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95bc988aa097378bddaa592fd18d9160fd621bf316e4ac3faea113f607fab5d1-rootfs.mount: Deactivated successfully. Mar 17 17:40:45.083191 containerd[1606]: time="2025-03-17T17:40:45.083064447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:0,}" Mar 17 17:40:46.288059 kubelet[1943]: E0317 17:40:45.698640 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:46.417896 containerd[1606]: time="2025-03-17T17:40:46.417835725Z" level=info msg="shim disconnected" id=95bc988aa097378bddaa592fd18d9160fd621bf316e4ac3faea113f607fab5d1 namespace=k8s.io Mar 17 17:40:46.417896 containerd[1606]: time="2025-03-17T17:40:46.417889065Z" level=warning msg="cleaning up after shim disconnected" id=95bc988aa097378bddaa592fd18d9160fd621bf316e4ac3faea113f607fab5d1 namespace=k8s.io Mar 17 17:40:46.417896 containerd[1606]: time="2025-03-17T17:40:46.417897949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:40:46.451317 kubelet[1943]: I0317 17:40:46.451247 1943 topology_manager.go:215] "Topology Admit Handler" podUID="c3c82e7e-fb50-42a4-b476-e62745be75f0" podNamespace="default" podName="nginx-deployment-85f456d6dd-d457j" Mar 17 17:40:46.466172 kubelet[1943]: W0317 17:40:46.466108 1943 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps 
"kube-root-ca.crt" is forbidden: User "system:node:10.0.0.38" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '10.0.0.38' and this object Mar 17 17:40:46.466338 kubelet[1943]: E0317 17:40:46.466213 1943 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.0.0.38" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '10.0.0.38' and this object Mar 17 17:40:46.514939 containerd[1606]: time="2025-03-17T17:40:46.514877583Z" level=error msg="Failed to destroy network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:46.515375 containerd[1606]: time="2025-03-17T17:40:46.515336584Z" level=error msg="encountered an error cleaning up failed sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:46.515528 containerd[1606]: time="2025-03-17T17:40:46.515399258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:46.516086 
kubelet[1943]: E0317 17:40:46.515661 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:46.516086 kubelet[1943]: E0317 17:40:46.515756 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:46.516086 kubelet[1943]: E0317 17:40:46.515779 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:46.516303 kubelet[1943]: E0317 17:40:46.515831 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d" Mar 17 17:40:46.517449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024-shm.mount: Deactivated successfully. Mar 17 17:40:46.635217 kubelet[1943]: I0317 17:40:46.635134 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbgzq\" (UniqueName: \"kubernetes.io/projected/c3c82e7e-fb50-42a4-b476-e62745be75f0-kube-api-access-rbgzq\") pod \"nginx-deployment-85f456d6dd-d457j\" (UID: \"c3c82e7e-fb50-42a4-b476-e62745be75f0\") " pod="default/nginx-deployment-85f456d6dd-d457j" Mar 17 17:40:46.699902 kubelet[1943]: E0317 17:40:46.699777 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:47.352100 kubelet[1943]: E0317 17:40:47.352039 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:47.353309 kubelet[1943]: I0317 17:40:47.352936 1943 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024" Mar 17 17:40:47.353453 containerd[1606]: time="2025-03-17T17:40:47.353153268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:40:47.353544 containerd[1606]: time="2025-03-17T17:40:47.353511745Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" Mar 17 17:40:47.353776 containerd[1606]: time="2025-03-17T17:40:47.353753404Z" level=info msg="Ensure that sandbox 8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024 in task-service 
has been cleanup successfully" Mar 17 17:40:47.354001 containerd[1606]: time="2025-03-17T17:40:47.353972949Z" level=info msg="TearDown network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" successfully" Mar 17 17:40:47.354001 containerd[1606]: time="2025-03-17T17:40:47.353990366Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" returns successfully" Mar 17 17:40:47.354883 containerd[1606]: time="2025-03-17T17:40:47.354859834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:1,}" Mar 17 17:40:47.356244 systemd[1]: run-netns-cni\x2df01e7806\x2d11fd\x2dea94\x2de71d\x2d453c104f41b4.mount: Deactivated successfully. Mar 17 17:40:47.654943 containerd[1606]: time="2025-03-17T17:40:47.654856819Z" level=error msg="Failed to destroy network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:47.655498 containerd[1606]: time="2025-03-17T17:40:47.655367729Z" level=error msg="encountered an error cleaning up failed sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:47.655498 containerd[1606]: time="2025-03-17T17:40:47.655429423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:47.655808 kubelet[1943]: E0317 17:40:47.655756 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:47.655898 kubelet[1943]: E0317 17:40:47.655840 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:47.655898 kubelet[1943]: E0317 17:40:47.655873 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:47.655977 kubelet[1943]: E0317 17:40:47.655947 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d" Mar 17 17:40:47.657791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038-shm.mount: Deactivated successfully. Mar 17 17:40:47.700069 kubelet[1943]: E0317 17:40:47.700000 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:47.745531 kubelet[1943]: E0317 17:40:47.745228 1943 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:40:47.745531 kubelet[1943]: E0317 17:40:47.745298 1943 projected.go:200] Error preparing data for projected volume kube-api-access-rbgzq for pod default/nginx-deployment-85f456d6dd-d457j: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:40:47.745531 kubelet[1943]: E0317 17:40:47.745407 1943 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3c82e7e-fb50-42a4-b476-e62745be75f0-kube-api-access-rbgzq podName:c3c82e7e-fb50-42a4-b476-e62745be75f0 nodeName:}" failed. No retries permitted until 2025-03-17 17:40:48.245373703 +0000 UTC m=+26.950766352 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rbgzq" (UniqueName: "kubernetes.io/projected/c3c82e7e-fb50-42a4-b476-e62745be75f0-kube-api-access-rbgzq") pod "nginx-deployment-85f456d6dd-d457j" (UID: "c3c82e7e-fb50-42a4-b476-e62745be75f0") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:40:48.383566 kubelet[1943]: I0317 17:40:48.382771 1943 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038" Mar 17 17:40:48.390074 containerd[1606]: time="2025-03-17T17:40:48.384408420Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\"" Mar 17 17:40:48.390074 containerd[1606]: time="2025-03-17T17:40:48.384671848Z" level=info msg="Ensure that sandbox 9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038 in task-service has been cleanup successfully" Mar 17 17:40:48.390074 containerd[1606]: time="2025-03-17T17:40:48.384871909Z" level=info msg="TearDown network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" successfully" Mar 17 17:40:48.390074 containerd[1606]: time="2025-03-17T17:40:48.384885780Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" returns successfully" Mar 17 17:40:48.390074 containerd[1606]: time="2025-03-17T17:40:48.388374685Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" Mar 17 17:40:48.390074 containerd[1606]: time="2025-03-17T17:40:48.388493398Z" level=info msg="TearDown network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" successfully" Mar 17 17:40:48.390074 containerd[1606]: time="2025-03-17T17:40:48.388505727Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" returns successfully" Mar 17 17:40:48.390878 systemd[1]: 
run-netns-cni\x2d1f537360\x2d48f7\x2dc943\x2de330\x2d8791c3816442.mount: Deactivated successfully. Mar 17 17:40:48.393346 containerd[1606]: time="2025-03-17T17:40:48.392512664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:2,}" Mar 17 17:40:48.557098 containerd[1606]: time="2025-03-17T17:40:48.556564966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:0,}" Mar 17 17:40:48.607358 containerd[1606]: time="2025-03-17T17:40:48.607277066Z" level=error msg="Failed to destroy network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:48.612146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06-shm.mount: Deactivated successfully. 
Mar 17 17:40:48.613290 containerd[1606]: time="2025-03-17T17:40:48.612756783Z" level=error msg="encountered an error cleaning up failed sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:40:48.613290 containerd[1606]: time="2025-03-17T17:40:48.612844589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:40:48.613679 kubelet[1943]: E0317 17:40:48.613611 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:40:48.613768 kubelet[1943]: E0317 17:40:48.613702 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck"
Mar 17 17:40:48.613768 kubelet[1943]: E0317 17:40:48.613752 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck"
Mar 17 17:40:48.614344 kubelet[1943]: E0317 17:40:48.613806 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d"
Mar 17 17:40:48.701782 kubelet[1943]: E0317 17:40:48.700981 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:48.877291 containerd[1606]: time="2025-03-17T17:40:48.876369216Z" level=error msg="Failed to destroy network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:40:48.880216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6-shm.mount: Deactivated successfully.
Mar 17 17:40:48.885075 containerd[1606]: time="2025-03-17T17:40:48.883867340Z" level=error msg="encountered an error cleaning up failed sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:40:48.885075 containerd[1606]: time="2025-03-17T17:40:48.883961746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:40:48.885278 kubelet[1943]: E0317 17:40:48.884706 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:40:48.885278 kubelet[1943]: E0317 17:40:48.884809 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-d457j"
Mar 17 17:40:48.885278 kubelet[1943]: E0317 17:40:48.884838 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-d457j"
Mar 17 17:40:48.885402 kubelet[1943]: E0317 17:40:48.884891 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-d457j_default(c3c82e7e-fb50-42a4-b476-e62745be75f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-d457j_default(c3c82e7e-fb50-42a4-b476-e62745be75f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-d457j" podUID="c3c82e7e-fb50-42a4-b476-e62745be75f0"
Mar 17 17:40:49.401543 kubelet[1943]: I0317 17:40:49.400972 1943 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06"
Mar 17 17:40:49.402383 containerd[1606]: time="2025-03-17T17:40:49.402157149Z" level=info msg="StopPodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\""
Mar 17 17:40:49.402627 containerd[1606]: time="2025-03-17T17:40:49.402499137Z" level=info msg="Ensure that sandbox 178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06 in task-service has been cleanup successfully"
Mar 17 17:40:49.409267 kubelet[1943]: I0317 17:40:49.408279 1943 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6"
Mar 17 17:40:49.409406 containerd[1606]: time="2025-03-17T17:40:49.408826356Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\""
Mar 17 17:40:49.409406 containerd[1606]: time="2025-03-17T17:40:49.409047072Z" level=info msg="Ensure that sandbox b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6 in task-service has been cleanup successfully"
Mar 17 17:40:49.416025 containerd[1606]: time="2025-03-17T17:40:49.415613076Z" level=info msg="TearDown network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" successfully"
Mar 17 17:40:49.416025 containerd[1606]: time="2025-03-17T17:40:49.415679841Z" level=info msg="StopPodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" returns successfully"
Mar 17 17:40:49.417028 containerd[1606]: time="2025-03-17T17:40:49.416786287Z" level=info msg="TearDown network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" successfully"
Mar 17 17:40:49.417028 containerd[1606]: time="2025-03-17T17:40:49.416846882Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" returns successfully"
Mar 17 17:40:49.417545 containerd[1606]: time="2025-03-17T17:40:49.417509732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:1,}"
Mar 17 17:40:49.418194 containerd[1606]: time="2025-03-17T17:40:49.417995945Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\""
Mar 17 17:40:49.418194 containerd[1606]: time="2025-03-17T17:40:49.418153222Z" level=info msg="TearDown network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" successfully"
Mar 17 17:40:49.418194
containerd[1606]: time="2025-03-17T17:40:49.418167815Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" returns successfully" Mar 17 17:40:49.419448 containerd[1606]: time="2025-03-17T17:40:49.419386367Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" Mar 17 17:40:49.419571 containerd[1606]: time="2025-03-17T17:40:49.419541982Z" level=info msg="TearDown network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" successfully" Mar 17 17:40:49.419571 containerd[1606]: time="2025-03-17T17:40:49.419559809Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" returns successfully" Mar 17 17:40:49.420962 containerd[1606]: time="2025-03-17T17:40:49.420927345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:3,}" Mar 17 17:40:49.578634 systemd[1]: run-netns-cni\x2d05485eee\x2dac38\x2d1091\x2db5ff\x2db0df726f6e31.mount: Deactivated successfully. Mar 17 17:40:49.578921 systemd[1]: run-netns-cni\x2d29c0ddd1\x2d9a91\x2d0a9b\x2d058f\x2d8a2c376d06e0.mount: Deactivated successfully. 
Mar 17 17:40:49.710823 kubelet[1943]: E0317 17:40:49.706707 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:49.819712 containerd[1606]: time="2025-03-17T17:40:49.819652094Z" level=error msg="Failed to destroy network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:49.828274 containerd[1606]: time="2025-03-17T17:40:49.825222526Z" level=error msg="encountered an error cleaning up failed sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:49.828274 containerd[1606]: time="2025-03-17T17:40:49.825318226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:49.827354 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118-shm.mount: Deactivated successfully. 
Mar 17 17:40:49.828516 kubelet[1943]: E0317 17:40:49.825616 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:49.828516 kubelet[1943]: E0317 17:40:49.825697 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:49.828516 kubelet[1943]: E0317 17:40:49.825737 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:49.828622 kubelet[1943]: E0317 17:40:49.825792 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d" Mar 17 17:40:49.832635 containerd[1606]: time="2025-03-17T17:40:49.830832399Z" level=error msg="Failed to destroy network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:49.842553 containerd[1606]: time="2025-03-17T17:40:49.842264921Z" level=error msg="encountered an error cleaning up failed sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:49.842553 containerd[1606]: time="2025-03-17T17:40:49.842359539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:49.843009 kubelet[1943]: E0317 17:40:49.842617 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:49.843009 kubelet[1943]: E0317 17:40:49.842689 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-d457j" Mar 17 17:40:49.843009 kubelet[1943]: E0317 17:40:49.842714 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-d457j" Mar 17 17:40:49.843121 kubelet[1943]: E0317 17:40:49.842788 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-d457j_default(c3c82e7e-fb50-42a4-b476-e62745be75f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-d457j_default(c3c82e7e-fb50-42a4-b476-e62745be75f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-d457j" podUID="c3c82e7e-fb50-42a4-b476-e62745be75f0" Mar 17 17:40:50.413263 kubelet[1943]: I0317 17:40:50.413211 1943 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118" Mar 17 17:40:50.413954 containerd[1606]: time="2025-03-17T17:40:50.413905039Z" level=info msg="StopPodSandbox for \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\"" Mar 17 17:40:50.414393 containerd[1606]: time="2025-03-17T17:40:50.414167695Z" level=info msg="Ensure that sandbox 6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118 in task-service has been cleanup successfully" Mar 17 17:40:50.414438 kubelet[1943]: I0317 17:40:50.414261 1943 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3" Mar 17 17:40:50.414474 containerd[1606]: time="2025-03-17T17:40:50.414408267Z" level=info msg="TearDown network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" successfully" Mar 17 17:40:50.414474 containerd[1606]: time="2025-03-17T17:40:50.414424523Z" level=info msg="StopPodSandbox for \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" returns successfully" Mar 17 17:40:50.414917 containerd[1606]: time="2025-03-17T17:40:50.414879426Z" level=info msg="StopPodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\"" Mar 17 17:40:50.415061 containerd[1606]: time="2025-03-17T17:40:50.415001179Z" level=info msg="TearDown network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" successfully" Mar 17 17:40:50.415061 containerd[1606]: time="2025-03-17T17:40:50.415047493Z" level=info msg="StopPodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" returns successfully" Mar 17 17:40:50.415128 containerd[1606]: time="2025-03-17T17:40:50.415098634Z" level=info msg="StopPodSandbox for \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\"" Mar 17 17:40:50.415274 containerd[1606]: time="2025-03-17T17:40:50.415254622Z" level=info 
msg="Ensure that sandbox acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3 in task-service has been cleanup successfully" Mar 17 17:40:50.415511 containerd[1606]: time="2025-03-17T17:40:50.415442851Z" level=info msg="TearDown network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" successfully" Mar 17 17:40:50.415511 containerd[1606]: time="2025-03-17T17:40:50.415461070Z" level=info msg="StopPodSandbox for \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" returns successfully" Mar 17 17:40:50.415600 containerd[1606]: time="2025-03-17T17:40:50.415574289Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\"" Mar 17 17:40:50.415673 containerd[1606]: time="2025-03-17T17:40:50.415652103Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\"" Mar 17 17:40:50.415712 containerd[1606]: time="2025-03-17T17:40:50.415663532Z" level=info msg="TearDown network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" successfully" Mar 17 17:40:50.415763 containerd[1606]: time="2025-03-17T17:40:50.415744250Z" level=info msg="TearDown network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" successfully" Mar 17 17:40:50.415763 containerd[1606]: time="2025-03-17T17:40:50.415758172Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" returns successfully" Mar 17 17:40:50.415817 containerd[1606]: time="2025-03-17T17:40:50.415757621Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" returns successfully" Mar 17 17:40:50.415970 containerd[1606]: time="2025-03-17T17:40:50.415948304Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" Mar 17 17:40:50.416040 containerd[1606]: 
time="2025-03-17T17:40:50.416025867Z" level=info msg="TearDown network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" successfully" Mar 17 17:40:50.416040 containerd[1606]: time="2025-03-17T17:40:50.416037966Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" returns successfully" Mar 17 17:40:50.416155 containerd[1606]: time="2025-03-17T17:40:50.416113666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:2,}" Mar 17 17:40:50.416404 containerd[1606]: time="2025-03-17T17:40:50.416378947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:4,}" Mar 17 17:40:50.578162 systemd[1]: run-netns-cni\x2dae39f72f\x2db6b3\x2d2398\x2d5fc6\x2d8fcac637907d.mount: Deactivated successfully. Mar 17 17:40:50.578355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3-shm.mount: Deactivated successfully. Mar 17 17:40:50.578500 systemd[1]: run-netns-cni\x2d9181d303\x2d015c\x2dfecb\x2d37ed\x2d24259b245da6.mount: Deactivated successfully. 
Mar 17 17:40:50.707953 kubelet[1943]: E0317 17:40:50.707791 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:51.708519 kubelet[1943]: E0317 17:40:51.708444 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:52.709366 kubelet[1943]: E0317 17:40:52.709293 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:53.710195 kubelet[1943]: E0317 17:40:53.710127 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:54.042845 containerd[1606]: time="2025-03-17T17:40:54.042680857Z" level=error msg="Failed to destroy network for sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:54.045583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c-shm.mount: Deactivated successfully. 
Mar 17 17:40:54.049528 containerd[1606]: time="2025-03-17T17:40:54.049467179Z" level=error msg="encountered an error cleaning up failed sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:54.049642 containerd[1606]: time="2025-03-17T17:40:54.049549456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:54.049933 kubelet[1943]: E0317 17:40:54.049863 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:54.049979 kubelet[1943]: E0317 17:40:54.049964 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-d457j" Mar 17 17:40:54.050008 kubelet[1943]: E0317 
17:40:54.049993 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-d457j" Mar 17 17:40:54.050087 kubelet[1943]: E0317 17:40:54.050051 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-d457j_default(c3c82e7e-fb50-42a4-b476-e62745be75f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-d457j_default(c3c82e7e-fb50-42a4-b476-e62745be75f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-d457j" podUID="c3c82e7e-fb50-42a4-b476-e62745be75f0" Mar 17 17:40:54.056784 containerd[1606]: time="2025-03-17T17:40:54.056706291Z" level=error msg="Failed to destroy network for sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:54.057213 containerd[1606]: time="2025-03-17T17:40:54.057187687Z" level=error msg="encountered an error cleaning up failed sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:54.057265 containerd[1606]: time="2025-03-17T17:40:54.057245762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:54.057534 kubelet[1943]: E0317 17:40:54.057492 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:54.057584 kubelet[1943]: E0317 17:40:54.057564 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:54.057624 kubelet[1943]: E0317 17:40:54.057589 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:54.057678 kubelet[1943]: E0317 17:40:54.057642 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d" Mar 17 17:40:54.059049 update_engine[1589]: I20250317 17:40:54.058977 1589 update_attempter.cc:509] Updating boot flags... Mar 17 17:40:54.060820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786-shm.mount: Deactivated successfully. 
Mar 17 17:40:54.096931 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2715) Mar 17 17:40:54.162802 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2718) Mar 17 17:40:54.221831 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2718) Mar 17 17:40:54.433065 kubelet[1943]: I0317 17:40:54.433013 1943 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786" Mar 17 17:40:54.433669 containerd[1606]: time="2025-03-17T17:40:54.433625806Z" level=info msg="StopPodSandbox for \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\"" Mar 17 17:40:54.433867 containerd[1606]: time="2025-03-17T17:40:54.433848485Z" level=info msg="Ensure that sandbox 4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786 in task-service has been cleanup successfully" Mar 17 17:40:54.434567 kubelet[1943]: I0317 17:40:54.434530 1943 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c" Mar 17 17:40:54.435102 containerd[1606]: time="2025-03-17T17:40:54.435073541Z" level=info msg="StopPodSandbox for \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\"" Mar 17 17:40:54.435409 containerd[1606]: time="2025-03-17T17:40:54.435376442Z" level=info msg="Ensure that sandbox 8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c in task-service has been cleanup successfully" Mar 17 17:40:54.435613 containerd[1606]: time="2025-03-17T17:40:54.435571564Z" level=info msg="TearDown network for sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\" successfully" Mar 17 17:40:54.435613 containerd[1606]: time="2025-03-17T17:40:54.435602865Z" level=info msg="StopPodSandbox for 
\"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\" returns successfully" Mar 17 17:40:54.435882 containerd[1606]: time="2025-03-17T17:40:54.435856665Z" level=info msg="StopPodSandbox for \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\"" Mar 17 17:40:54.436062 containerd[1606]: time="2025-03-17T17:40:54.435986410Z" level=info msg="TearDown network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" successfully" Mar 17 17:40:54.436062 containerd[1606]: time="2025-03-17T17:40:54.436001084Z" level=info msg="StopPodSandbox for \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" returns successfully" Mar 17 17:40:54.436062 containerd[1606]: time="2025-03-17T17:40:54.436033017Z" level=info msg="TearDown network for sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\" successfully" Mar 17 17:40:54.436062 containerd[1606]: time="2025-03-17T17:40:54.436047511Z" level=info msg="StopPodSandbox for \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\" returns successfully" Mar 17 17:40:54.436476 containerd[1606]: time="2025-03-17T17:40:54.436451409Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\"" Mar 17 17:40:54.436577 containerd[1606]: time="2025-03-17T17:40:54.436543832Z" level=info msg="TearDown network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" successfully" Mar 17 17:40:54.436622 containerd[1606]: time="2025-03-17T17:40:54.436574803Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" returns successfully" Mar 17 17:40:54.436664 containerd[1606]: time="2025-03-17T17:40:54.436642004Z" level=info msg="StopPodSandbox for \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\"" Mar 17 17:40:54.436663 systemd[1]: run-netns-cni\x2d2e603a4b\x2d6954\x2d5d11\x2da817\x2da34d04e35cb8.mount: Deactivated 
successfully. Mar 17 17:40:54.436811 containerd[1606]: time="2025-03-17T17:40:54.436786723Z" level=info msg="TearDown network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" successfully" Mar 17 17:40:54.436811 containerd[1606]: time="2025-03-17T17:40:54.436808309Z" level=info msg="StopPodSandbox for \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" returns successfully" Mar 17 17:40:54.437330 containerd[1606]: time="2025-03-17T17:40:54.437304950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:3,}" Mar 17 17:40:54.437921 containerd[1606]: time="2025-03-17T17:40:54.437835537Z" level=info msg="StopPodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\"" Mar 17 17:40:54.438581 containerd[1606]: time="2025-03-17T17:40:54.438017238Z" level=info msg="TearDown network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" successfully" Mar 17 17:40:54.438581 containerd[1606]: time="2025-03-17T17:40:54.438038223Z" level=info msg="StopPodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" returns successfully" Mar 17 17:40:54.438581 containerd[1606]: time="2025-03-17T17:40:54.438338029Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\"" Mar 17 17:40:54.438581 containerd[1606]: time="2025-03-17T17:40:54.438446889Z" level=info msg="TearDown network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" successfully" Mar 17 17:40:54.438581 containerd[1606]: time="2025-03-17T17:40:54.438458287Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" returns successfully" Mar 17 17:40:54.439791 containerd[1606]: time="2025-03-17T17:40:54.438710825Z" level=info msg="StopPodSandbox for 
\"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" Mar 17 17:40:54.439948 containerd[1606]: time="2025-03-17T17:40:54.439920365Z" level=info msg="TearDown network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" successfully" Mar 17 17:40:54.439948 containerd[1606]: time="2025-03-17T17:40:54.439945306Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" returns successfully" Mar 17 17:40:54.440522 containerd[1606]: time="2025-03-17T17:40:54.440491099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:5,}" Mar 17 17:40:54.441065 systemd[1]: run-netns-cni\x2defe7a97f\x2d7186\x2d8307\x2d85a4\x2d4ecb169961af.mount: Deactivated successfully. Mar 17 17:40:54.711037 kubelet[1943]: E0317 17:40:54.710899 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:55.711560 kubelet[1943]: E0317 17:40:55.711516 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:56.076774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794405162.mount: Deactivated successfully. 
Mar 17 17:40:56.712846 kubelet[1943]: E0317 17:40:56.712717 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:57.713542 kubelet[1943]: E0317 17:40:57.713450 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:57.881717 containerd[1606]: time="2025-03-17T17:40:57.881635436Z" level=error msg="Failed to destroy network for sandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:57.882345 containerd[1606]: time="2025-03-17T17:40:57.882123031Z" level=error msg="encountered an error cleaning up failed sandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:57.882345 containerd[1606]: time="2025-03-17T17:40:57.882180538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:57.882490 kubelet[1943]: E0317 17:40:57.882414 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:57.882545 kubelet[1943]: E0317 17:40:57.882481 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-d457j" Mar 17 17:40:57.882545 kubelet[1943]: E0317 17:40:57.882517 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-d457j" Mar 17 17:40:57.882629 kubelet[1943]: E0317 17:40:57.882574 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-d457j_default(c3c82e7e-fb50-42a4-b476-e62745be75f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-d457j_default(c3c82e7e-fb50-42a4-b476-e62745be75f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-85f456d6dd-d457j" podUID="c3c82e7e-fb50-42a4-b476-e62745be75f0" Mar 17 17:40:57.883988 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c-shm.mount: Deactivated successfully. Mar 17 17:40:57.915123 containerd[1606]: time="2025-03-17T17:40:57.915062605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:57.921543 containerd[1606]: time="2025-03-17T17:40:57.921469970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 17:40:57.926456 containerd[1606]: time="2025-03-17T17:40:57.926388966Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:57.932276 containerd[1606]: time="2025-03-17T17:40:57.932213955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:57.933092 containerd[1606]: time="2025-03-17T17:40:57.933039373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 10.579847103s" Mar 17 17:40:57.933092 containerd[1606]: time="2025-03-17T17:40:57.933085330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 17:40:57.943366 containerd[1606]: time="2025-03-17T17:40:57.943325382Z" 
level=info msg="CreateContainer within sandbox \"5b8ece45cc19cd83a7c62bfca68753ac1263d1534021775c22ca2d1f8989be67\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:40:57.976863 containerd[1606]: time="2025-03-17T17:40:57.976592240Z" level=error msg="Failed to destroy network for sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:57.977035 containerd[1606]: time="2025-03-17T17:40:57.977004288Z" level=error msg="encountered an error cleaning up failed sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:57.977094 containerd[1606]: time="2025-03-17T17:40:57.977067054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:40:57.977412 kubelet[1943]: E0317 17:40:57.977341 1943 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 17 17:40:57.977626 kubelet[1943]: E0317 17:40:57.977428 1943 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:57.977626 kubelet[1943]: E0317 17:40:57.977456 1943 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t27ck" Mar 17 17:40:57.977731 kubelet[1943]: E0317 17:40:57.977669 1943 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t27ck_calico-system(4c4e0744-d875-41c4-9067-a3b54356fd5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t27ck" podUID="4c4e0744-d875-41c4-9067-a3b54356fd5d" Mar 17 17:40:57.991838 containerd[1606]: time="2025-03-17T17:40:57.991746090Z" level=info msg="CreateContainer within sandbox \"5b8ece45cc19cd83a7c62bfca68753ac1263d1534021775c22ca2d1f8989be67\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"99151742ac07568a2adde9c10f60c3caae79212aed75700deed29f347d076bc0\"" Mar 17 17:40:57.992803 containerd[1606]: time="2025-03-17T17:40:57.992507999Z" level=info msg="StartContainer for \"99151742ac07568a2adde9c10f60c3caae79212aed75700deed29f347d076bc0\"" Mar 17 17:40:58.068850 containerd[1606]: time="2025-03-17T17:40:58.068786053Z" level=info msg="StartContainer for \"99151742ac07568a2adde9c10f60c3caae79212aed75700deed29f347d076bc0\" returns successfully" Mar 17 17:40:58.141081 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:40:58.141228 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:40:58.444010 kubelet[1943]: E0317 17:40:58.443977 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:58.447040 kubelet[1943]: I0317 17:40:58.447003 1943 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f" Mar 17 17:40:58.447580 containerd[1606]: time="2025-03-17T17:40:58.447543267Z" level=info msg="StopPodSandbox for \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\"" Mar 17 17:40:58.447851 containerd[1606]: time="2025-03-17T17:40:58.447823965Z" level=info msg="Ensure that sandbox 6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f in task-service has been cleanup successfully" Mar 17 17:40:58.448261 containerd[1606]: time="2025-03-17T17:40:58.448071157Z" level=info msg="TearDown network for sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\" successfully" Mar 17 17:40:58.448261 containerd[1606]: time="2025-03-17T17:40:58.448093204Z" level=info msg="StopPodSandbox for \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\" 
returns successfully" Mar 17 17:40:58.448660 containerd[1606]: time="2025-03-17T17:40:58.448493707Z" level=info msg="StopPodSandbox for \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\"" Mar 17 17:40:58.448660 containerd[1606]: time="2025-03-17T17:40:58.448593907Z" level=info msg="TearDown network for sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\" successfully" Mar 17 17:40:58.448660 containerd[1606]: time="2025-03-17T17:40:58.448607631Z" level=info msg="StopPodSandbox for \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\" returns successfully" Mar 17 17:40:58.448957 containerd[1606]: time="2025-03-17T17:40:58.448933576Z" level=info msg="StopPodSandbox for \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\"" Mar 17 17:40:58.449181 containerd[1606]: time="2025-03-17T17:40:58.449090423Z" level=info msg="TearDown network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" successfully" Mar 17 17:40:58.449181 containerd[1606]: time="2025-03-17T17:40:58.449110007Z" level=info msg="StopPodSandbox for \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" returns successfully" Mar 17 17:40:58.449270 kubelet[1943]: I0317 17:40:58.449224 1943 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c" Mar 17 17:40:58.450063 containerd[1606]: time="2025-03-17T17:40:58.449530794Z" level=info msg="StopPodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\"" Mar 17 17:40:58.450063 containerd[1606]: time="2025-03-17T17:40:58.449623392Z" level=info msg="TearDown network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" successfully" Mar 17 17:40:58.450063 containerd[1606]: time="2025-03-17T17:40:58.449638387Z" level=info msg="StopPodSandbox for 
\"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" returns successfully" Mar 17 17:40:58.450063 containerd[1606]: time="2025-03-17T17:40:58.449762669Z" level=info msg="StopPodSandbox for \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\"" Mar 17 17:40:58.450063 containerd[1606]: time="2025-03-17T17:40:58.449928101Z" level=info msg="Ensure that sandbox 40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c in task-service has been cleanup successfully" Mar 17 17:40:58.450334 containerd[1606]: time="2025-03-17T17:40:58.450312767Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\"" Mar 17 17:40:58.450493 containerd[1606]: time="2025-03-17T17:40:58.450467901Z" level=info msg="TearDown network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" successfully" Mar 17 17:40:58.450493 containerd[1606]: time="2025-03-17T17:40:58.450490239Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" returns successfully" Mar 17 17:40:58.450622 containerd[1606]: time="2025-03-17T17:40:58.450575845Z" level=info msg="TearDown network for sandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\" successfully" Mar 17 17:40:58.450622 containerd[1606]: time="2025-03-17T17:40:58.450589768Z" level=info msg="StopPodSandbox for \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\" returns successfully" Mar 17 17:40:58.450899 containerd[1606]: time="2025-03-17T17:40:58.450843140Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" Mar 17 17:40:58.451002 containerd[1606]: time="2025-03-17T17:40:58.450977399Z" level=info msg="TearDown network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" successfully" Mar 17 17:40:58.451002 containerd[1606]: time="2025-03-17T17:40:58.450998194Z" level=info 
msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" returns successfully" Mar 17 17:40:58.451272 containerd[1606]: time="2025-03-17T17:40:58.451245716Z" level=info msg="StopPodSandbox for \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\"" Mar 17 17:40:58.451362 containerd[1606]: time="2025-03-17T17:40:58.451337903Z" level=info msg="TearDown network for sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\" successfully" Mar 17 17:40:58.451362 containerd[1606]: time="2025-03-17T17:40:58.451358248Z" level=info msg="StopPodSandbox for \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\" returns successfully" Mar 17 17:40:58.451506 containerd[1606]: time="2025-03-17T17:40:58.451491385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:6,}" Mar 17 17:40:58.452133 containerd[1606]: time="2025-03-17T17:40:58.452095093Z" level=info msg="StopPodSandbox for \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\"" Mar 17 17:40:58.452216 containerd[1606]: time="2025-03-17T17:40:58.452189424Z" level=info msg="TearDown network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" successfully" Mar 17 17:40:58.452216 containerd[1606]: time="2025-03-17T17:40:58.452208296Z" level=info msg="StopPodSandbox for \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" returns successfully" Mar 17 17:40:58.452564 containerd[1606]: time="2025-03-17T17:40:58.452529063Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\"" Mar 17 17:40:58.452641 containerd[1606]: time="2025-03-17T17:40:58.452621911Z" level=info msg="TearDown network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" successfully" Mar 17 17:40:58.452712 containerd[1606]: 
time="2025-03-17T17:40:58.452638479Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" returns successfully" Mar 17 17:40:58.453105 containerd[1606]: time="2025-03-17T17:40:58.453038712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:4,}" Mar 17 17:40:58.714532 kubelet[1943]: E0317 17:40:58.714353 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:40:58.739964 systemd[1]: run-netns-cni\x2d482bad3a\x2d1da2\x2d63d7\x2de8d1\x2d03fbe265edc7.mount: Deactivated successfully. Mar 17 17:40:58.740215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f-shm.mount: Deactivated successfully. Mar 17 17:40:58.740397 systemd[1]: run-netns-cni\x2d534f0fee\x2d591c\x2d7dbb\x2d530e\x2da22cb3738cdb.mount: Deactivated successfully. 
Mar 17 17:40:59.286165 systemd-networkd[1254]: cali0b137fa5b28: Link UP Mar 17 17:40:59.286435 systemd-networkd[1254]: cali0b137fa5b28: Gained carrier Mar 17 17:40:59.315851 kubelet[1943]: I0317 17:40:59.315477 1943 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hq5rp" podStartSLOduration=8.204716921 podStartE2EDuration="37.315436368s" podCreationTimestamp="2025-03-17 17:40:22 +0000 UTC" firstStartedPulling="2025-03-17 17:40:28.824257554 +0000 UTC m=+7.529650213" lastFinishedPulling="2025-03-17 17:40:57.934977011 +0000 UTC m=+36.640369660" observedRunningTime="2025-03-17 17:40:58.465201226 +0000 UTC m=+37.170593895" watchObservedRunningTime="2025-03-17 17:40:59.315436368 +0000 UTC m=+38.020829017" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.546 [INFO][2875] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.596 [INFO][2875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.38-k8s-csi--node--driver--t27ck-eth0 csi-node-driver- calico-system 4c4e0744-d875-41c4-9067-a3b54356fd5d 735 0 2025-03-17 17:40:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.38 csi-node-driver-t27ck eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0b137fa5b28 [] []}} ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Namespace="calico-system" Pod="csi-node-driver-t27ck" WorkloadEndpoint="10.0.0.38-k8s-csi--node--driver--t27ck-" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.596 [INFO][2875] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Namespace="calico-system" Pod="csi-node-driver-t27ck" WorkloadEndpoint="10.0.0.38-k8s-csi--node--driver--t27ck-eth0" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.666 [INFO][2890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" HandleID="k8s-pod-network.2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Workload="10.0.0.38-k8s-csi--node--driver--t27ck-eth0" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.676 [INFO][2890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" HandleID="k8s-pod-network.2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Workload="10.0.0.38-k8s-csi--node--driver--t27ck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000284200), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.38", "pod":"csi-node-driver-t27ck", "timestamp":"2025-03-17 17:40:58.666695204 +0000 UTC"}, Hostname:"10.0.0.38", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.676 [INFO][2890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.677 [INFO][2890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.677 [INFO][2890] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.38' Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.697 [INFO][2890] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" host="10.0.0.38" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.705 [INFO][2890] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.38" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.764 [INFO][2890] ipam/ipam.go 489: Trying affinity for 192.168.116.64/26 host="10.0.0.38" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.766 [INFO][2890] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.64/26 host="10.0.0.38" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.770 [INFO][2890] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="10.0.0.38" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.770 [INFO][2890] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" host="10.0.0.38" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.772 [INFO][2890] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662 Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.789 [INFO][2890] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" host="10.0.0.38" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.857 [INFO][2890] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.65/26] block=192.168.116.64/26 
handle="k8s-pod-network.2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" host="10.0.0.38" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.857 [INFO][2890] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.65/26] handle="k8s-pod-network.2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" host="10.0.0.38" Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.858 [INFO][2890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:40:59.319018 containerd[1606]: 2025-03-17 17:40:58.858 [INFO][2890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.65/26] IPv6=[] ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" HandleID="k8s-pod-network.2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Workload="10.0.0.38-k8s-csi--node--driver--t27ck-eth0" Mar 17 17:40:59.320380 containerd[1606]: 2025-03-17 17:40:58.866 [INFO][2875] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Namespace="calico-system" Pod="csi-node-driver-t27ck" WorkloadEndpoint="10.0.0.38-k8s-csi--node--driver--t27ck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38-k8s-csi--node--driver--t27ck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c4e0744-d875-41c4-9067-a3b54356fd5d", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.38", ContainerID:"", Pod:"csi-node-driver-t27ck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b137fa5b28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:40:59.320380 containerd[1606]: 2025-03-17 17:40:58.866 [INFO][2875] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.65/32] ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Namespace="calico-system" Pod="csi-node-driver-t27ck" WorkloadEndpoint="10.0.0.38-k8s-csi--node--driver--t27ck-eth0" Mar 17 17:40:59.320380 containerd[1606]: 2025-03-17 17:40:58.866 [INFO][2875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b137fa5b28 ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Namespace="calico-system" Pod="csi-node-driver-t27ck" WorkloadEndpoint="10.0.0.38-k8s-csi--node--driver--t27ck-eth0" Mar 17 17:40:59.320380 containerd[1606]: 2025-03-17 17:40:59.286 [INFO][2875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Namespace="calico-system" Pod="csi-node-driver-t27ck" WorkloadEndpoint="10.0.0.38-k8s-csi--node--driver--t27ck-eth0" Mar 17 17:40:59.320380 containerd[1606]: 2025-03-17 17:40:59.287 [INFO][2875] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Namespace="calico-system" 
Pod="csi-node-driver-t27ck" WorkloadEndpoint="10.0.0.38-k8s-csi--node--driver--t27ck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38-k8s-csi--node--driver--t27ck-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4c4e0744-d875-41c4-9067-a3b54356fd5d", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.38", ContainerID:"2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662", Pod:"csi-node-driver-t27ck", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0b137fa5b28", MAC:"1a:62:79:66:27:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:40:59.320380 containerd[1606]: 2025-03-17 17:40:59.315 [INFO][2875] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662" Namespace="calico-system" Pod="csi-node-driver-t27ck" WorkloadEndpoint="10.0.0.38-k8s-csi--node--driver--t27ck-eth0"
Mar 17 17:40:59.358145 containerd[1606]: time="2025-03-17T17:40:59.357169027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:40:59.358145 containerd[1606]: time="2025-03-17T17:40:59.357856546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:40:59.358145 containerd[1606]: time="2025-03-17T17:40:59.357878513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:40:59.358145 containerd[1606]: time="2025-03-17T17:40:59.358028009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:40:59.377863 systemd[1]: run-containerd-runc-k8s.io-2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662-runc.mWqzGD.mount: Deactivated successfully.
Mar 17 17:40:59.390322 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 17:40:59.407527 containerd[1606]: time="2025-03-17T17:40:59.407468665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t27ck,Uid:4c4e0744-d875-41c4-9067-a3b54356fd5d,Namespace:calico-system,Attempt:6,} returns sandbox id \"2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662\""
Mar 17 17:40:59.409977 containerd[1606]: time="2025-03-17T17:40:59.409944010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\""
Mar 17 17:40:59.456437 kubelet[1943]: E0317 17:40:59.456407 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:40:59.653008 systemd-networkd[1254]: calic21ea0c4978: Link UP
Mar 17 17:40:59.654155 systemd-networkd[1254]: calic21ea0c4978: Gained carrier
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.338 [INFO][2913] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.353 [INFO][2913] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0 nginx-deployment-85f456d6dd- default c3c82e7e-fb50-42a4-b476-e62745be75f0 990 0 2025-03-17 17:40:46 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.38 nginx-deployment-85f456d6dd-d457j eth0 default [] [] [kns.default ksa.default.default] calic21ea0c4978 [] []}} ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Namespace="default" Pod="nginx-deployment-85f456d6dd-d457j" WorkloadEndpoint="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.354 [INFO][2913] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Namespace="default" Pod="nginx-deployment-85f456d6dd-d457j" WorkloadEndpoint="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.390 [INFO][2956] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" HandleID="k8s-pod-network.7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Workload="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.605 [INFO][2956] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" HandleID="k8s-pod-network.7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Workload="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052f3e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.38", "pod":"nginx-deployment-85f456d6dd-d457j", "timestamp":"2025-03-17 17:40:59.390656478 +0000 UTC"}, Hostname:"10.0.0.38", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.605 [INFO][2956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.605 [INFO][2956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.605 [INFO][2956] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.38'
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.608 [INFO][2956] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" host="10.0.0.38"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.614 [INFO][2956] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.38"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.624 [INFO][2956] ipam/ipam.go 489: Trying affinity for 192.168.116.64/26 host="10.0.0.38"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.629 [INFO][2956] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.64/26 host="10.0.0.38"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.632 [INFO][2956] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="10.0.0.38"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.632 [INFO][2956] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" host="10.0.0.38"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.634 [INFO][2956] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.640 [INFO][2956] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" host="10.0.0.38"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.647 [INFO][2956] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.66/26] block=192.168.116.64/26 handle="k8s-pod-network.7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" host="10.0.0.38"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.647 [INFO][2956] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.66/26] handle="k8s-pod-network.7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" host="10.0.0.38"
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.647 [INFO][2956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 17 17:40:59.666546 containerd[1606]: 2025-03-17 17:40:59.647 [INFO][2956] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.66/26] IPv6=[] ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" HandleID="k8s-pod-network.7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Workload="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0"
Mar 17 17:40:59.667153 containerd[1606]: 2025-03-17 17:40:59.650 [INFO][2913] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Namespace="default" Pod="nginx-deployment-85f456d6dd-d457j" WorkloadEndpoint="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"c3c82e7e-fb50-42a4-b476-e62745be75f0", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.38", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-d457j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic21ea0c4978", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:40:59.667153 containerd[1606]: 2025-03-17 17:40:59.650 [INFO][2913] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.66/32] ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Namespace="default" Pod="nginx-deployment-85f456d6dd-d457j" WorkloadEndpoint="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0"
Mar 17 17:40:59.667153 containerd[1606]: 2025-03-17 17:40:59.650 [INFO][2913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic21ea0c4978 ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Namespace="default" Pod="nginx-deployment-85f456d6dd-d457j" WorkloadEndpoint="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0"
Mar 17 17:40:59.667153 containerd[1606]: 2025-03-17 17:40:59.656 [INFO][2913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Namespace="default" Pod="nginx-deployment-85f456d6dd-d457j" WorkloadEndpoint="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0"
Mar 17 17:40:59.667153 containerd[1606]: 2025-03-17 17:40:59.656 [INFO][2913] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Namespace="default" Pod="nginx-deployment-85f456d6dd-d457j" WorkloadEndpoint="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"c3c82e7e-fb50-42a4-b476-e62745be75f0", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 40, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.38", ContainerID:"7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0", Pod:"nginx-deployment-85f456d6dd-d457j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic21ea0c4978", MAC:"6e:26:7b:60:2c:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:40:59.667153 containerd[1606]: 2025-03-17 17:40:59.663 [INFO][2913] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0" Namespace="default" Pod="nginx-deployment-85f456d6dd-d457j" WorkloadEndpoint="10.0.0.38-k8s-nginx--deployment--85f456d6dd--d457j-eth0"
Mar 17 17:40:59.689068 containerd[1606]: time="2025-03-17T17:40:59.688949661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:40:59.689068 containerd[1606]: time="2025-03-17T17:40:59.689016576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:40:59.689068 containerd[1606]: time="2025-03-17T17:40:59.689030139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:40:59.689310 containerd[1606]: time="2025-03-17T17:40:59.689126986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:40:59.714600 kubelet[1943]: E0317 17:40:59.714537 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:40:59.725372 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 17:40:59.758325 containerd[1606]: time="2025-03-17T17:40:59.758275424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d457j,Uid:c3c82e7e-fb50-42a4-b476-e62745be75f0,Namespace:default,Attempt:4,} returns sandbox id \"7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0\""
Mar 17 17:41:00.209893 kernel: bpftool[3198]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Mar 17 17:41:00.389900 systemd-networkd[1254]: cali0b137fa5b28: Gained IPv6LL
Mar 17 17:41:00.482130 systemd-networkd[1254]: vxlan.calico: Link UP
Mar 17 17:41:00.482142 systemd-networkd[1254]: vxlan.calico: Gained carrier
Mar 17 17:41:00.715519 kubelet[1943]: E0317 17:41:00.715474 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:01.541985 systemd-networkd[1254]: vxlan.calico: Gained IPv6LL
Mar 17 17:41:01.605969 systemd-networkd[1254]: calic21ea0c4978: Gained IPv6LL
Mar 17 17:41:01.715869 kubelet[1943]: E0317 17:41:01.715784 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:02.320932 containerd[1606]: time="2025-03-17T17:41:02.320858473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:02.323444 containerd[1606]: time="2025-03-17T17:41:02.323395826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887"
Mar 17 17:41:02.326358 containerd[1606]: time="2025-03-17T17:41:02.326303754Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:02.332077 containerd[1606]: time="2025-03-17T17:41:02.332037637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:02.332742 containerd[1606]: time="2025-03-17T17:41:02.332682772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 2.922708991s"
Mar 17 17:41:02.332742 containerd[1606]: time="2025-03-17T17:41:02.332715889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\""
Mar 17 17:41:02.335737 containerd[1606]: time="2025-03-17T17:41:02.334682818Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Mar 17 17:41:02.335956 containerd[1606]: time="2025-03-17T17:41:02.335911829Z" level=info msg="CreateContainer within sandbox \"2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 17 17:41:02.460255 containerd[1606]: time="2025-03-17T17:41:02.460188663Z" level=info msg="CreateContainer within sandbox \"2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"866523a61df6498f59d13f3fcdd8d661c067e684d858b09f638ae75f9db4c3de\""
Mar 17 17:41:02.460948 containerd[1606]: time="2025-03-17T17:41:02.460891598Z" level=info msg="StartContainer for \"866523a61df6498f59d13f3fcdd8d661c067e684d858b09f638ae75f9db4c3de\""
Mar 17 17:41:02.684536 kubelet[1943]: E0317 17:41:02.684469 1943 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:02.716063 kubelet[1943]: E0317 17:41:02.716006 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:02.774168 containerd[1606]: time="2025-03-17T17:41:02.774107237Z" level=info msg="StartContainer for \"866523a61df6498f59d13f3fcdd8d661c067e684d858b09f638ae75f9db4c3de\" returns successfully"
Mar 17 17:41:03.716842 kubelet[1943]: E0317 17:41:03.716789 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:04.717965 kubelet[1943]: E0317 17:41:04.717897 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:05.719020 kubelet[1943]: E0317 17:41:05.718944 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:06.719767 kubelet[1943]: E0317 17:41:06.719715 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:07.257914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1965526622.mount: Deactivated successfully.
Mar 17 17:41:07.723745 kubelet[1943]: E0317 17:41:07.721316 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:08.608112 containerd[1606]: time="2025-03-17T17:41:08.608043279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:08.609339 containerd[1606]: time="2025-03-17T17:41:08.609287831Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131"
Mar 17 17:41:08.610903 containerd[1606]: time="2025-03-17T17:41:08.610867421Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:08.613751 containerd[1606]: time="2025-03-17T17:41:08.613704967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:08.615278 containerd[1606]: time="2025-03-17T17:41:08.615250205Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 6.280531445s"
Mar 17 17:41:08.615367 containerd[1606]: time="2025-03-17T17:41:08.615344785Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\""
Mar 17 17:41:08.618035 containerd[1606]: time="2025-03-17T17:41:08.618002900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\""
Mar 17 17:41:08.618507 containerd[1606]: time="2025-03-17T17:41:08.618477858Z" level=info msg="CreateContainer within sandbox \"7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Mar 17 17:41:08.634187 containerd[1606]: time="2025-03-17T17:41:08.634125183Z" level=info msg="CreateContainer within sandbox \"7443374c7fa64879cfcd6703f495d98814ac411b1636047d61787377992e11b0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"299988d842e8dc4b0a2a7a7b8c21ce5aa13bb75d90524cb42cf08dc52ca83d3b\""
Mar 17 17:41:08.634964 containerd[1606]: time="2025-03-17T17:41:08.634916426Z" level=info msg="StartContainer for \"299988d842e8dc4b0a2a7a7b8c21ce5aa13bb75d90524cb42cf08dc52ca83d3b\""
Mar 17 17:41:08.722200 kubelet[1943]: E0317 17:41:08.722141 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:09.069980 containerd[1606]: time="2025-03-17T17:41:09.069923043Z" level=info msg="StartContainer for \"299988d842e8dc4b0a2a7a7b8c21ce5aa13bb75d90524cb42cf08dc52ca83d3b\" returns successfully"
Mar 17 17:41:09.496603 kubelet[1943]: I0317 17:41:09.496520 1943 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-d457j" podStartSLOduration=14.638767264 podStartE2EDuration="23.496502878s" podCreationTimestamp="2025-03-17 17:40:46 +0000 UTC" firstStartedPulling="2025-03-17 17:40:59.759610233 +0000 UTC m=+38.465002892" lastFinishedPulling="2025-03-17 17:41:08.617345847 +0000 UTC m=+47.322738506" observedRunningTime="2025-03-17 17:41:09.496415842 +0000 UTC m=+48.201808491" watchObservedRunningTime="2025-03-17 17:41:09.496502878 +0000 UTC m=+48.201895527"
Mar 17 17:41:09.723142 kubelet[1943]: E0317 17:41:09.723068 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:10.723713 kubelet[1943]: E0317 17:41:10.723615 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:10.790703 containerd[1606]: time="2025-03-17T17:41:10.790624262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:10.792535 containerd[1606]: time="2025-03-17T17:41:10.792493841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843"
Mar 17 17:41:10.796754 containerd[1606]: time="2025-03-17T17:41:10.793956507Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:10.798545 containerd[1606]: time="2025-03-17T17:41:10.798483078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:10.799516 containerd[1606]: time="2025-03-17T17:41:10.799461275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 2.181418123s"
Mar 17 17:41:10.799595 containerd[1606]: time="2025-03-17T17:41:10.799517446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\""
Mar 17 17:41:10.802582 containerd[1606]: time="2025-03-17T17:41:10.802535587Z" level=info msg="CreateContainer within sandbox \"2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 17 17:41:10.832062 containerd[1606]: time="2025-03-17T17:41:10.832001674Z" level=info msg="CreateContainer within sandbox \"2aa6eb9096caa8b6c4b625d9c5eefc3ceb03d7c47e4cb56c3b9b0c7021c0d662\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"280199628f64a1704338882c103b742fc1ce43f989353fd956472e067d6d86a1\""
Mar 17 17:41:10.833753 containerd[1606]: time="2025-03-17T17:41:10.832753835Z" level=info msg="StartContainer for \"280199628f64a1704338882c103b742fc1ce43f989353fd956472e067d6d86a1\""
Mar 17 17:41:10.928102 containerd[1606]: time="2025-03-17T17:41:10.927787263Z" level=info msg="StartContainer for \"280199628f64a1704338882c103b742fc1ce43f989353fd956472e067d6d86a1\" returns successfully"
Mar 17 17:41:11.723847 kubelet[1943]: E0317 17:41:11.723792 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:11.890584 kubelet[1943]: I0317 17:41:11.890520 1943 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 17 17:41:11.890584 kubelet[1943]: I0317 17:41:11.890575 1943 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 17 17:41:12.724653 kubelet[1943]: E0317 17:41:12.724584 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:13.724875 kubelet[1943]: E0317 17:41:13.724791 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:14.726007 kubelet[1943]: E0317 17:41:14.725933 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:15.726193 kubelet[1943]: E0317 17:41:15.726126 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:16.726359 kubelet[1943]: E0317 17:41:16.726286 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:17.441027 kubelet[1943]: E0317 17:41:17.440986 1943 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:41:17.669633 kubelet[1943]: I0317 17:41:17.669560 1943 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t27ck" podStartSLOduration=44.278204179 podStartE2EDuration="55.669537594s" podCreationTimestamp="2025-03-17 17:40:22 +0000 UTC" firstStartedPulling="2025-03-17 17:40:59.409417858 +0000 UTC m=+38.114810507" lastFinishedPulling="2025-03-17 17:41:10.800751272 +0000 UTC m=+49.506143922" observedRunningTime="2025-03-17 17:41:11.65714552 +0000 UTC m=+50.362538169" watchObservedRunningTime="2025-03-17 17:41:17.669537594 +0000 UTC m=+56.374930243"
Mar 17 17:41:17.726999 kubelet[1943]: E0317 17:41:17.726834 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:17.787569 kubelet[1943]: I0317 17:41:17.787494 1943 topology_manager.go:215] "Topology Admit Handler" podUID="c62c7b85-4c75-435e-be88-cca2e7d5a5c8" podNamespace="default" podName="nfs-server-provisioner-0"
Mar 17 17:41:17.859964 kubelet[1943]: I0317 17:41:17.859899 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c62c7b85-4c75-435e-be88-cca2e7d5a5c8-data\") pod \"nfs-server-provisioner-0\" (UID: \"c62c7b85-4c75-435e-be88-cca2e7d5a5c8\") " pod="default/nfs-server-provisioner-0"
Mar 17 17:41:17.859964 kubelet[1943]: I0317 17:41:17.859959 1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fthk\" (UniqueName: \"kubernetes.io/projected/c62c7b85-4c75-435e-be88-cca2e7d5a5c8-kube-api-access-4fthk\") pod \"nfs-server-provisioner-0\" (UID: \"c62c7b85-4c75-435e-be88-cca2e7d5a5c8\") " pod="default/nfs-server-provisioner-0"
Mar 17 17:41:18.093414 containerd[1606]: time="2025-03-17T17:41:18.093287795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c62c7b85-4c75-435e-be88-cca2e7d5a5c8,Namespace:default,Attempt:0,}"
Mar 17 17:41:18.561899 systemd-networkd[1254]: cali60e51b789ff: Link UP
Mar 17 17:41:18.562712 systemd-networkd[1254]: cali60e51b789ff: Gained carrier
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.477 [INFO][3487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.38-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default c62c7b85-4c75-435e-be88-cca2e7d5a5c8 1209 0 2025-03-17 17:41:17 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.38 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.38-k8s-nfs--server--provisioner--0-"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.478 [INFO][3487] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.38-k8s-nfs--server--provisioner--0-eth0"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.513 [INFO][3502] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" HandleID="k8s-pod-network.53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Workload="10.0.0.38-k8s-nfs--server--provisioner--0-eth0"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.523 [INFO][3502] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" HandleID="k8s-pod-network.53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Workload="10.0.0.38-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027f420), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.38", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 17:41:18.513211013 +0000 UTC"}, Hostname:"10.0.0.38", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.523 [INFO][3502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.523 [INFO][3502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.524 [INFO][3502] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.38'
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.526 [INFO][3502] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" host="10.0.0.38"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.532 [INFO][3502] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.38"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.537 [INFO][3502] ipam/ipam.go 489: Trying affinity for 192.168.116.64/26 host="10.0.0.38"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.539 [INFO][3502] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.64/26 host="10.0.0.38"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.541 [INFO][3502] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="10.0.0.38"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.541 [INFO][3502] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" host="10.0.0.38"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.544 [INFO][3502] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.550 [INFO][3502] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" host="10.0.0.38"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.555 [INFO][3502] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.67/26] block=192.168.116.64/26 handle="k8s-pod-network.53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" host="10.0.0.38"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.555 [INFO][3502] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.67/26] handle="k8s-pod-network.53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" host="10.0.0.38"
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.555 [INFO][3502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 17 17:41:18.578968 containerd[1606]: 2025-03-17 17:41:18.555 [INFO][3502] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.67/26] IPv6=[] ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" HandleID="k8s-pod-network.53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Workload="10.0.0.38-k8s-nfs--server--provisioner--0-eth0"
Mar 17 17:41:18.579736 containerd[1606]: 2025-03-17 17:41:18.559 [INFO][3487] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.38-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c62c7b85-4c75-435e-be88-cca2e7d5a5c8", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.38", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.116.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:41:18.579736 containerd[1606]: 2025-03-17 17:41:18.559 [INFO][3487] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.67/32] ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.38-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:41:18.579736 containerd[1606]: 2025-03-17 17:41:18.559 [INFO][3487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.38-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:41:18.579736 containerd[1606]: 2025-03-17 17:41:18.562 [INFO][3487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.38-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:41:18.579955 containerd[1606]: 2025-03-17 17:41:18.563 [INFO][3487] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.0.0.38-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c62c7b85-4c75-435e-be88-cca2e7d5a5c8", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.38", ContainerID:"53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.116.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"a6:3a:c9:a4:e2:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:41:18.579955 containerd[1606]: 2025-03-17 17:41:18.574 [INFO][3487] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.38-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:41:18.604902 containerd[1606]: time="2025-03-17T17:41:18.604780630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:18.604902 containerd[1606]: time="2025-03-17T17:41:18.604852901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:18.604902 containerd[1606]: time="2025-03-17T17:41:18.604867278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:18.605196 containerd[1606]: time="2025-03-17T17:41:18.604973793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:18.633788 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:41:18.668995 containerd[1606]: time="2025-03-17T17:41:18.668949859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c62c7b85-4c75-435e-be88-cca2e7d5a5c8,Namespace:default,Attempt:0,} returns sandbox id \"53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a\"" Mar 17 17:41:18.670440 containerd[1606]: time="2025-03-17T17:41:18.670408927Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:41:18.727610 kubelet[1943]: E0317 17:41:18.727520 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:41:19.728518 kubelet[1943]: E0317 17:41:19.728451 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:41:20.357981 systemd-networkd[1254]: cali60e51b789ff: Gained IPv6LL Mar 17 17:41:20.728832 kubelet[1943]: E0317 17:41:20.728768 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:41:21.265546 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2120100977.mount: Deactivated successfully. Mar 17 17:41:21.729503 kubelet[1943]: E0317 17:41:21.729441 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:41:22.685157 kubelet[1943]: E0317 17:41:22.685108 1943 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:41:22.723705 containerd[1606]: time="2025-03-17T17:41:22.723661421Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" Mar 17 17:41:22.724274 containerd[1606]: time="2025-03-17T17:41:22.723798332Z" level=info msg="TearDown network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" successfully" Mar 17 17:41:22.724274 containerd[1606]: time="2025-03-17T17:41:22.723860347Z" level=info msg="StopPodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" returns successfully" Mar 17 17:41:22.725698 containerd[1606]: time="2025-03-17T17:41:22.724501625Z" level=info msg="RemovePodSandbox for \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" Mar 17 17:41:22.725698 containerd[1606]: time="2025-03-17T17:41:22.724528105Z" level=info msg="Forcibly stopping sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\"" Mar 17 17:41:22.725698 containerd[1606]: time="2025-03-17T17:41:22.724612770Z" level=info msg="TearDown network for sandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" successfully" Mar 17 17:41:22.727827 containerd[1606]: time="2025-03-17T17:41:22.727655064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.727827 containerd[1606]: time="2025-03-17T17:41:22.727734069Z" level=info msg="RemovePodSandbox \"8a3cc296440e416156e734928d767f66029d108773ec8516199e8d3a45b51024\" returns successfully" Mar 17 17:41:22.728283 containerd[1606]: time="2025-03-17T17:41:22.728240630Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\"" Mar 17 17:41:22.728446 containerd[1606]: time="2025-03-17T17:41:22.728350021Z" level=info msg="TearDown network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" successfully" Mar 17 17:41:22.728446 containerd[1606]: time="2025-03-17T17:41:22.728400465Z" level=info msg="StopPodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" returns successfully" Mar 17 17:41:22.729746 containerd[1606]: time="2025-03-17T17:41:22.728761888Z" level=info msg="RemovePodSandbox for \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\"" Mar 17 17:41:22.729746 containerd[1606]: time="2025-03-17T17:41:22.728811290Z" level=info msg="Forcibly stopping sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\"" Mar 17 17:41:22.729746 containerd[1606]: time="2025-03-17T17:41:22.728930348Z" level=info msg="TearDown network for sandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" successfully" Mar 17 17:41:22.730074 kubelet[1943]: E0317 17:41:22.730033 1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:41:22.732322 containerd[1606]: time="2025-03-17T17:41:22.732280678Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.732462 containerd[1606]: time="2025-03-17T17:41:22.732328607Z" level=info msg="RemovePodSandbox \"9e0a1504cb30272bcfdf99612a07ad62ccd7633f810328ae023b2b56d1a65038\" returns successfully" Mar 17 17:41:22.732688 containerd[1606]: time="2025-03-17T17:41:22.732660177Z" level=info msg="StopPodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\"" Mar 17 17:41:22.732854 containerd[1606]: time="2025-03-17T17:41:22.732835388Z" level=info msg="TearDown network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" successfully" Mar 17 17:41:22.732935 containerd[1606]: time="2025-03-17T17:41:22.732853321Z" level=info msg="StopPodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" returns successfully" Mar 17 17:41:22.736069 containerd[1606]: time="2025-03-17T17:41:22.736042896Z" level=info msg="RemovePodSandbox for \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\"" Mar 17 17:41:22.736218 containerd[1606]: time="2025-03-17T17:41:22.736185228Z" level=info msg="Forcibly stopping sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\"" Mar 17 17:41:22.736458 containerd[1606]: time="2025-03-17T17:41:22.736402327Z" level=info msg="TearDown network for sandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" successfully" Mar 17 17:41:22.739381 containerd[1606]: time="2025-03-17T17:41:22.739347402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.739461 containerd[1606]: time="2025-03-17T17:41:22.739401471Z" level=info msg="RemovePodSandbox \"178a0e3f9d6ade24cdadd35fa151ad6597174655f03fa071ac4f0ee194fbfb06\" returns successfully" Mar 17 17:41:22.739910 containerd[1606]: time="2025-03-17T17:41:22.739703567Z" level=info msg="StopPodSandbox for \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\"" Mar 17 17:41:22.739910 containerd[1606]: time="2025-03-17T17:41:22.739823718Z" level=info msg="TearDown network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" successfully" Mar 17 17:41:22.739910 containerd[1606]: time="2025-03-17T17:41:22.739837964Z" level=info msg="StopPodSandbox for \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" returns successfully" Mar 17 17:41:22.740149 containerd[1606]: time="2025-03-17T17:41:22.740109132Z" level=info msg="RemovePodSandbox for \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\"" Mar 17 17:41:22.740149 containerd[1606]: time="2025-03-17T17:41:22.740135060Z" level=info msg="Forcibly stopping sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\"" Mar 17 17:41:22.740267 containerd[1606]: time="2025-03-17T17:41:22.740206221Z" level=info msg="TearDown network for sandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" successfully" Mar 17 17:41:22.743435 containerd[1606]: time="2025-03-17T17:41:22.743231023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.743435 containerd[1606]: time="2025-03-17T17:41:22.743287787Z" level=info msg="RemovePodSandbox \"6365bed170e5ec3c9475a52b93db824f9d73fcc71a7f21921d63089cd9a8e118\" returns successfully" Mar 17 17:41:22.743622 containerd[1606]: time="2025-03-17T17:41:22.743595193Z" level=info msg="StopPodSandbox for \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\"" Mar 17 17:41:22.743806 containerd[1606]: time="2025-03-17T17:41:22.743698833Z" level=info msg="TearDown network for sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\" successfully" Mar 17 17:41:22.743806 containerd[1606]: time="2025-03-17T17:41:22.743716595Z" level=info msg="StopPodSandbox for \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\" returns successfully" Mar 17 17:41:22.744181 containerd[1606]: time="2025-03-17T17:41:22.744145164Z" level=info msg="RemovePodSandbox for \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\"" Mar 17 17:41:22.744218 containerd[1606]: time="2025-03-17T17:41:22.744184556Z" level=info msg="Forcibly stopping sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\"" Mar 17 17:41:22.744335 containerd[1606]: time="2025-03-17T17:41:22.744279381Z" level=info msg="TearDown network for sandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\" successfully" Mar 17 17:41:22.747652 containerd[1606]: time="2025-03-17T17:41:22.747617408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.747652 containerd[1606]: time="2025-03-17T17:41:22.747659145Z" level=info msg="RemovePodSandbox \"4e61d45dbb36600b585ae1682e05da624c7acf80163fde66c7f62e8b2c755786\" returns successfully" Mar 17 17:41:22.748160 containerd[1606]: time="2025-03-17T17:41:22.748133496Z" level=info msg="StopPodSandbox for \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\"" Mar 17 17:41:22.748258 containerd[1606]: time="2025-03-17T17:41:22.748237058Z" level=info msg="TearDown network for sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\" successfully" Mar 17 17:41:22.748258 containerd[1606]: time="2025-03-17T17:41:22.748253297Z" level=info msg="StopPodSandbox for \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\" returns successfully" Mar 17 17:41:22.748792 containerd[1606]: time="2025-03-17T17:41:22.748759899Z" level=info msg="RemovePodSandbox for \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\"" Mar 17 17:41:22.748844 containerd[1606]: time="2025-03-17T17:41:22.748793801Z" level=info msg="Forcibly stopping sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\"" Mar 17 17:41:22.748936 containerd[1606]: time="2025-03-17T17:41:22.748893484Z" level=info msg="TearDown network for sandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\" successfully" Mar 17 17:41:22.752365 containerd[1606]: time="2025-03-17T17:41:22.752344650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.752550 containerd[1606]: time="2025-03-17T17:41:22.752451787Z" level=info msg="RemovePodSandbox \"6c8765ddf8778b17ffca135984f613dcb3c8f653db415a506335361d34186a7f\" returns successfully" Mar 17 17:41:22.752802 containerd[1606]: time="2025-03-17T17:41:22.752774860Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\"" Mar 17 17:41:22.752901 containerd[1606]: time="2025-03-17T17:41:22.752871539Z" level=info msg="TearDown network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" successfully" Mar 17 17:41:22.752901 containerd[1606]: time="2025-03-17T17:41:22.752895583Z" level=info msg="StopPodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" returns successfully" Mar 17 17:41:22.753166 containerd[1606]: time="2025-03-17T17:41:22.753143198Z" level=info msg="RemovePodSandbox for \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\"" Mar 17 17:41:22.753166 containerd[1606]: time="2025-03-17T17:41:22.753162744Z" level=info msg="Forcibly stopping sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\"" Mar 17 17:41:22.753257 containerd[1606]: time="2025-03-17T17:41:22.753226722Z" level=info msg="TearDown network for sandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" successfully" Mar 17 17:41:22.756656 containerd[1606]: time="2025-03-17T17:41:22.756625841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.756838 containerd[1606]: time="2025-03-17T17:41:22.756779134Z" level=info msg="RemovePodSandbox \"b545d38e1c6c0bc199aabb7fce94db1cf717e3760a4e576d64af07e0fbee31e6\" returns successfully" Mar 17 17:41:22.757300 containerd[1606]: time="2025-03-17T17:41:22.757100174Z" level=info msg="StopPodSandbox for \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\"" Mar 17 17:41:22.757300 containerd[1606]: time="2025-03-17T17:41:22.757207130Z" level=info msg="TearDown network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" successfully" Mar 17 17:41:22.757300 containerd[1606]: time="2025-03-17T17:41:22.757219543Z" level=info msg="StopPodSandbox for \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" returns successfully" Mar 17 17:41:22.757514 containerd[1606]: time="2025-03-17T17:41:22.757486013Z" level=info msg="RemovePodSandbox for \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\"" Mar 17 17:41:22.757514 containerd[1606]: time="2025-03-17T17:41:22.757508584Z" level=info msg="Forcibly stopping sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\"" Mar 17 17:41:22.757614 containerd[1606]: time="2025-03-17T17:41:22.757569587Z" level=info msg="TearDown network for sandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" successfully" Mar 17 17:41:22.760789 containerd[1606]: time="2025-03-17T17:41:22.760758140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.760852 containerd[1606]: time="2025-03-17T17:41:22.760801268Z" level=info msg="RemovePodSandbox \"acb19ac8b9b705198667ef5debb533bf216a3eb38db90e11f5d268d38057aaf3\" returns successfully" Mar 17 17:41:22.761123 containerd[1606]: time="2025-03-17T17:41:22.761100278Z" level=info msg="StopPodSandbox for \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\"" Mar 17 17:41:22.761216 containerd[1606]: time="2025-03-17T17:41:22.761191927Z" level=info msg="TearDown network for sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\" successfully" Mar 17 17:41:22.761216 containerd[1606]: time="2025-03-17T17:41:22.761212354Z" level=info msg="StopPodSandbox for \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\" returns successfully" Mar 17 17:41:22.761548 containerd[1606]: time="2025-03-17T17:41:22.761487831Z" level=info msg="RemovePodSandbox for \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\"" Mar 17 17:41:22.761585 containerd[1606]: time="2025-03-17T17:41:22.761544625Z" level=info msg="Forcibly stopping sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\"" Mar 17 17:41:22.761664 containerd[1606]: time="2025-03-17T17:41:22.761636144Z" level=info msg="TearDown network for sandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\" successfully" Mar 17 17:41:22.766589 containerd[1606]: time="2025-03-17T17:41:22.766547314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.766665 containerd[1606]: time="2025-03-17T17:41:22.766605441Z" level=info msg="RemovePodSandbox \"8d403688f80d5354cedf6f315d9262f0f869588e362d00b3547dc4894a72cf2c\" returns successfully" Mar 17 17:41:22.767748 containerd[1606]: time="2025-03-17T17:41:22.766924006Z" level=info msg="StopPodSandbox for \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\"" Mar 17 17:41:22.767748 containerd[1606]: time="2025-03-17T17:41:22.767038547Z" level=info msg="TearDown network for sandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\" successfully" Mar 17 17:41:22.767748 containerd[1606]: time="2025-03-17T17:41:22.767052663Z" level=info msg="StopPodSandbox for \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\" returns successfully" Mar 17 17:41:22.767748 containerd[1606]: time="2025-03-17T17:41:22.767343267Z" level=info msg="RemovePodSandbox for \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\"" Mar 17 17:41:22.767748 containerd[1606]: time="2025-03-17T17:41:22.767371790Z" level=info msg="Forcibly stopping sandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\"" Mar 17 17:41:22.767748 containerd[1606]: time="2025-03-17T17:41:22.767431900Z" level=info msg="TearDown network for sandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\" successfully" Mar 17 17:41:22.773674 containerd[1606]: time="2025-03-17T17:41:22.773614969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:41:22.773674 containerd[1606]: time="2025-03-17T17:41:22.773667185Z" level=info msg="RemovePodSandbox \"40dcd459c2cefce67935525fb424bb4fa76eab28bccbbc57b11a5d18b061803c\" returns successfully"
Mar 17 17:41:23.730660 kubelet[1943]: E0317 17:41:23.730589    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:24.697015 containerd[1606]: time="2025-03-17T17:41:24.696949284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:24.731192 kubelet[1943]: E0317 17:41:24.731114    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:24.753982 containerd[1606]: time="2025-03-17T17:41:24.753870134Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Mar 17 17:41:24.780265 containerd[1606]: time="2025-03-17T17:41:24.780208356Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:24.818933 containerd[1606]: time="2025-03-17T17:41:24.818843343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:24.820282 containerd[1606]: time="2025-03-17T17:41:24.820231272Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.149779717s"
Mar 17 17:41:24.820282 containerd[1606]: time="2025-03-17T17:41:24.820287164Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Mar 17 17:41:24.822567 containerd[1606]: time="2025-03-17T17:41:24.822532312Z" level=info msg="CreateContainer within sandbox \"53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Mar 17 17:41:25.175674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000096267.mount: Deactivated successfully.
Mar 17 17:41:25.266107 containerd[1606]: time="2025-03-17T17:41:25.266050076Z" level=info msg="CreateContainer within sandbox \"53a6e85f73cddcc3db10264a1bb26bbae6a2c17a772ecf9495e07fefbbeb700a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6d249971df627a11fce79b330084584e223d3135271cf2ce42fb8367a952612b\""
Mar 17 17:41:25.266718 containerd[1606]: time="2025-03-17T17:41:25.266656855Z" level=info msg="StartContainer for \"6d249971df627a11fce79b330084584e223d3135271cf2ce42fb8367a952612b\""
Mar 17 17:41:25.438754 containerd[1606]: time="2025-03-17T17:41:25.438551562Z" level=info msg="StartContainer for \"6d249971df627a11fce79b330084584e223d3135271cf2ce42fb8367a952612b\" returns successfully"
Mar 17 17:41:25.731999 kubelet[1943]: E0317 17:41:25.731817    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:26.732307 kubelet[1943]: E0317 17:41:26.732238    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:27.733025 kubelet[1943]: E0317 17:41:27.732967    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:28.734086 kubelet[1943]: E0317 17:41:28.734007    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:29.734693 kubelet[1943]: E0317 17:41:29.734617    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:30.735607 kubelet[1943]: E0317 17:41:30.735559    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:31.735905 kubelet[1943]: E0317 17:41:31.735848    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:32.736696 kubelet[1943]: E0317 17:41:32.736614    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:33.737876 kubelet[1943]: E0317 17:41:33.737787    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:34.738530 kubelet[1943]: E0317 17:41:34.738466    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:34.926933 kubelet[1943]: I0317 17:41:34.926832    1943 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.775914947 podStartE2EDuration="17.926813893s" podCreationTimestamp="2025-03-17 17:41:17 +0000 UTC" firstStartedPulling="2025-03-17 17:41:18.670190848 +0000 UTC m=+57.375583497" lastFinishedPulling="2025-03-17 17:41:24.821089794 +0000 UTC m=+63.526482443" observedRunningTime="2025-03-17 17:41:25.673768263 +0000 UTC m=+64.379160912" watchObservedRunningTime="2025-03-17 17:41:34.926813893 +0000 UTC m=+73.632206542"
Mar 17 17:41:34.927155 kubelet[1943]: I0317 17:41:34.927085    1943 topology_manager.go:215] "Topology Admit Handler" podUID="d173e623-35b1-4825-a7d1-8ca39cbb8251" podNamespace="default" podName="test-pod-1"
Mar 17 17:41:35.064514 kubelet[1943]: I0317 17:41:35.064053    1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krrls\" (UniqueName: \"kubernetes.io/projected/d173e623-35b1-4825-a7d1-8ca39cbb8251-kube-api-access-krrls\") pod \"test-pod-1\" (UID: \"d173e623-35b1-4825-a7d1-8ca39cbb8251\") " pod="default/test-pod-1"
Mar 17 17:41:35.064514 kubelet[1943]: I0317 17:41:35.064110    1943 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1b0497d-452b-4791-bcf1-0d7da0ada00f\" (UniqueName: \"kubernetes.io/nfs/d173e623-35b1-4825-a7d1-8ca39cbb8251-pvc-b1b0497d-452b-4791-bcf1-0d7da0ada00f\") pod \"test-pod-1\" (UID: \"d173e623-35b1-4825-a7d1-8ca39cbb8251\") " pod="default/test-pod-1"
Mar 17 17:41:35.194775 kernel: FS-Cache: Loaded
Mar 17 17:41:35.267112 kernel: RPC: Registered named UNIX socket transport module.
Mar 17 17:41:35.267261 kernel: RPC: Registered udp transport module.
Mar 17 17:41:35.267282 kernel: RPC: Registered tcp transport module.
Mar 17 17:41:35.267773 kernel: RPC: Registered tcp-with-tls transport module.
Mar 17 17:41:35.269433 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Mar 17 17:41:35.567847 kernel: NFS: Registering the id_resolver key type
Mar 17 17:41:35.568055 kernel: Key type id_resolver registered
Mar 17 17:41:35.568079 kernel: Key type id_legacy registered
Mar 17 17:41:35.598292 nfsidmap[3701]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Mar 17 17:41:35.604231 nfsidmap[3704]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Mar 17 17:41:35.739490 kubelet[1943]: E0317 17:41:35.739413    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:35.832024 containerd[1606]: time="2025-03-17T17:41:35.831714268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d173e623-35b1-4825-a7d1-8ca39cbb8251,Namespace:default,Attempt:0,}"
Mar 17 17:41:36.149560 systemd-networkd[1254]: cali5ec59c6bf6e: Link UP
Mar 17 17:41:36.150377 systemd-networkd[1254]: cali5ec59c6bf6e: Gained carrier
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:35.985 [INFO][3707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.38-k8s-test--pod--1-eth0 default d173e623-35b1-4825-a7d1-8ca39cbb8251 1288 0 2025-03-17 17:41:17 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.38 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.38-k8s-test--pod--1-"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:35.985 [INFO][3707] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.38-k8s-test--pod--1-eth0"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.018 [INFO][3722] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" HandleID="k8s-pod-network.fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Workload="10.0.0.38-k8s-test--pod--1-eth0"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.029 [INFO][3722] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" HandleID="k8s-pod-network.fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Workload="10.0.0.38-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002decb0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.38", "pod":"test-pod-1", "timestamp":"2025-03-17 17:41:36.01877074 +0000 UTC"}, Hostname:"10.0.0.38", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.029 [INFO][3722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.029 [INFO][3722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.029 [INFO][3722] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.38'
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.031 [INFO][3722] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" host="10.0.0.38"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.035 [INFO][3722] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.38"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.040 [INFO][3722] ipam/ipam.go 489: Trying affinity for 192.168.116.64/26 host="10.0.0.38"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.041 [INFO][3722] ipam/ipam.go 155: Attempting to load block cidr=192.168.116.64/26 host="10.0.0.38"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.043 [INFO][3722] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="10.0.0.38"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.043 [INFO][3722] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" host="10.0.0.38"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.045 [INFO][3722] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.079 [INFO][3722] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" host="10.0.0.38"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.143 [INFO][3722] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.116.68/26] block=192.168.116.64/26 handle="k8s-pod-network.fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" host="10.0.0.38"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.143 [INFO][3722] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.116.68/26] handle="k8s-pod-network.fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" host="10.0.0.38"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.143 [INFO][3722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.143 [INFO][3722] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.68/26] IPv6=[] ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" HandleID="k8s-pod-network.fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Workload="10.0.0.38-k8s-test--pod--1-eth0"
Mar 17 17:41:36.166903 containerd[1606]: 2025-03-17 17:41:36.147 [INFO][3707] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.38-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d173e623-35b1-4825-a7d1-8ca39cbb8251", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 41, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.38", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:41:36.167555 containerd[1606]: 2025-03-17 17:41:36.147 [INFO][3707] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.116.68/32] ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.38-k8s-test--pod--1-eth0"
Mar 17 17:41:36.167555 containerd[1606]: 2025-03-17 17:41:36.147 [INFO][3707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.38-k8s-test--pod--1-eth0"
Mar 17 17:41:36.167555 containerd[1606]: 2025-03-17 17:41:36.150 [INFO][3707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.38-k8s-test--pod--1-eth0"
Mar 17 17:41:36.167555 containerd[1606]: 2025-03-17 17:41:36.150 [INFO][3707] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.38-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.38-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d173e623-35b1-4825-a7d1-8ca39cbb8251", ResourceVersion:"1288", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 41, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.38", ContainerID:"fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.116.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"02:46:c2:4d:67:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:41:36.167555 containerd[1606]: 2025-03-17 17:41:36.163 [INFO][3707] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.38-k8s-test--pod--1-eth0"
Mar 17 17:41:36.205762 containerd[1606]: time="2025-03-17T17:41:36.205616005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:41:36.205762 containerd[1606]: time="2025-03-17T17:41:36.205710219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:41:36.205762 containerd[1606]: time="2025-03-17T17:41:36.205734876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:41:36.205966 containerd[1606]: time="2025-03-17T17:41:36.205868785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:41:36.232751 systemd-resolved[1461]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 17:41:36.262224 containerd[1606]: time="2025-03-17T17:41:36.262176784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d173e623-35b1-4825-a7d1-8ca39cbb8251,Namespace:default,Attempt:0,} returns sandbox id \"fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714\""
Mar 17 17:41:36.263887 containerd[1606]: time="2025-03-17T17:41:36.263806566Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Mar 17 17:41:36.740695 kubelet[1943]: E0317 17:41:36.740610    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:36.798873 containerd[1606]: time="2025-03-17T17:41:36.798801567Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:41:36.800162 containerd[1606]: time="2025-03-17T17:41:36.800041083Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Mar 17 17:41:36.804941 containerd[1606]: time="2025-03-17T17:41:36.804844148Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 540.975909ms"
Mar 17 17:41:36.804941 containerd[1606]: time="2025-03-17T17:41:36.804913066Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\""
Mar 17 17:41:36.808320 containerd[1606]: time="2025-03-17T17:41:36.808275622Z" level=info msg="CreateContainer within sandbox \"fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Mar 17 17:41:36.833317 containerd[1606]: time="2025-03-17T17:41:36.833252325Z" level=info msg="CreateContainer within sandbox \"fdf02c3c636be6cdcaa527dc8ee7692cf5aa5049f7a18292672f07dddb81a714\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f2c7a71b0bc3b2097e20c7069efd1fdf98894663315871bd15559ff2afe175d8\""
Mar 17 17:41:36.834525 containerd[1606]: time="2025-03-17T17:41:36.834464009Z" level=info msg="StartContainer for \"f2c7a71b0bc3b2097e20c7069efd1fdf98894663315871bd15559ff2afe175d8\""
Mar 17 17:41:36.915223 containerd[1606]: time="2025-03-17T17:41:36.915130542Z" level=info msg="StartContainer for \"f2c7a71b0bc3b2097e20c7069efd1fdf98894663315871bd15559ff2afe175d8\" returns successfully"
Mar 17 17:41:37.563317 kubelet[1943]: I0317 17:41:37.563240    1943 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=20.020652922 podStartE2EDuration="20.563216573s" podCreationTimestamp="2025-03-17 17:41:17 +0000 UTC" firstStartedPulling="2025-03-17 17:41:36.263552493 +0000 UTC m=+74.968945142" lastFinishedPulling="2025-03-17 17:41:36.806116144 +0000 UTC m=+75.511508793" observedRunningTime="2025-03-17 17:41:37.562877982 +0000 UTC m=+76.268270631" watchObservedRunningTime="2025-03-17 17:41:37.563216573 +0000 UTC m=+76.268609222"
Mar 17 17:41:37.741678 kubelet[1943]: E0317 17:41:37.741553    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:37.766065 systemd-networkd[1254]: cali5ec59c6bf6e: Gained IPv6LL
Mar 17 17:41:38.742441 kubelet[1943]: E0317 17:41:38.742319    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:39.743196 kubelet[1943]: E0317 17:41:39.743088    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:41:40.744263 kubelet[1943]: E0317 17:41:40.744198    1943 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"