Feb 13 15:54:09.093311 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025
Feb 13 15:54:09.093360 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:54:09.093379 kernel: BIOS-provided physical RAM map:
Feb 13 15:54:09.093393 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 15:54:09.093406 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 15:54:09.093420 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 15:54:09.093437 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 15:54:09.093451 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 15:54:09.093469 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd328fff] usable
Feb 13 15:54:09.093482 kernel: BIOS-e820: [mem 0x00000000bd329000-0x00000000bd330fff] ACPI data
Feb 13 15:54:09.093496 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable
Feb 13 15:54:09.093511 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Feb 13 15:54:09.093525 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 15:54:09.093541 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 15:54:09.093563 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 15:54:09.093580 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 15:54:09.093596 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 15:54:09.093613 kernel: NX (Execute Disable) protection: active
Feb 13 15:54:09.093629 kernel: APIC: Static calls initialized
Feb 13 15:54:09.093645 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:54:09.093672 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd329018
Feb 13 15:54:09.093689 kernel: random: crng init done
Feb 13 15:54:09.093705 kernel: secureboot: Secure boot disabled
Feb 13 15:54:09.093721 kernel: SMBIOS 2.4 present.
Feb 13 15:54:09.093741 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 15:54:09.093758 kernel: Hypervisor detected: KVM
Feb 13 15:54:09.093774 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:54:09.093790 kernel: kvm-clock: using sched offset of 12996406779 cycles
Feb 13 15:54:09.093808 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:54:09.093825 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 15:54:09.093842 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:54:09.093859 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:54:09.093875 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 15:54:09.093892 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 15:54:09.093912 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:54:09.093928 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 15:54:09.093944 kernel: Using GB pages for direct mapping
Feb 13 15:54:09.093959 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:54:09.093975 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 15:54:09.093991 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 13 15:54:09.094014 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 15:54:09.094034 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 15:54:09.094050 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 15:54:09.094066 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 15:54:09.094083 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 13 15:54:09.094100 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 15:54:09.094117 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 15:54:09.094134 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 15:54:09.094155 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 15:54:09.094172 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 15:54:09.094206 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 15:54:09.094222 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 15:54:09.094238 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 15:54:09.094255 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 15:54:09.094271 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 15:54:09.094288 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 15:54:09.094305 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 15:54:09.094327 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 15:54:09.094343 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:54:09.094359 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:54:09.094376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 15:54:09.094393 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 15:54:09.094410 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 15:54:09.094427 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 15:54:09.094445 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 15:54:09.094463 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Feb 13 15:54:09.094485 kernel: Zone ranges:
Feb 13 15:54:09.094502 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:54:09.094520 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 15:54:09.094538 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 15:54:09.094555 kernel: Movable zone start for each node
Feb 13 15:54:09.094572 kernel: Early memory node ranges
Feb 13 15:54:09.094590 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 15:54:09.094608 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 15:54:09.094625 kernel: node 0: [mem 0x0000000000100000-0x00000000bd328fff]
Feb 13 15:54:09.094646 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff]
Feb 13 15:54:09.094673 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 15:54:09.094690 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 15:54:09.094708 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 15:54:09.094726 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:54:09.094744 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 15:54:09.094761 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 15:54:09.094779 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Feb 13 15:54:09.094796 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:54:09.094816 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 15:54:09.094833 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:54:09.094849 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:54:09.094866 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:54:09.094882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:54:09.094898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:54:09.094914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:54:09.094930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:54:09.094946 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:54:09.094967 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:54:09.094984 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 15:54:09.095000 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:54:09.095017 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:54:09.095034 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:54:09.095050 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:54:09.095066 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:54:09.095082 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:54:09.095099 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:54:09.095119 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:54:09.095138 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:54:09.095155 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:54:09.095172 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 15:54:09.095214 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:54:09.095232 kernel: Fallback order for Node 0: 0
Feb 13 15:54:09.095250 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Feb 13 15:54:09.095267 kernel: Policy zone: Normal
Feb 13 15:54:09.095290 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:54:09.095306 kernel: software IO TLB: area num 2.
Feb 13 15:54:09.095323 kernel: Memory: 7511320K/7860552K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 348976K reserved, 0K cma-reserved)
Feb 13 15:54:09.095340 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:54:09.095356 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:54:09.095373 kernel: ftrace: allocating 37890 entries in 149 pages
Feb 13 15:54:09.095389 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:54:09.095406 kernel: Dynamic Preempt: voluntary
Feb 13 15:54:09.095440 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:54:09.095464 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:54:09.095482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:54:09.095499 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:54:09.095520 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:54:09.095539 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:54:09.095556 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:54:09.095574 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:54:09.095592 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:54:09.095614 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:54:09.095632 kernel: Console: colour dummy device 80x25
Feb 13 15:54:09.095650 kernel: printk: console [ttyS0] enabled
Feb 13 15:54:09.095684 kernel: ACPI: Core revision 20230628
Feb 13 15:54:09.095702 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:54:09.095720 kernel: x2apic enabled
Feb 13 15:54:09.095738 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:54:09.095756 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 15:54:09.095774 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 15:54:09.095797 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 15:54:09.095815 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 15:54:09.095833 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 15:54:09.095852 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:54:09.095870 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 15:54:09.095888 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 15:54:09.095906 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 15:54:09.095925 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:54:09.095947 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:54:09.095965 kernel: RETBleed: Mitigation: IBRS
Feb 13 15:54:09.095984 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:54:09.096002 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 15:54:09.096021 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:54:09.096039 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 15:54:09.096057 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:54:09.096076 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:54:09.096094 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:54:09.096116 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:54:09.096134 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:54:09.096153 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 15:54:09.096172 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:54:09.096205 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:54:09.096222 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:54:09.096240 kernel: landlock: Up and running.
Feb 13 15:54:09.096258 kernel: SELinux: Initializing.
Feb 13 15:54:09.096275 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:54:09.096297 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:54:09.096315 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 15:54:09.096334 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:54:09.096353 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:54:09.096372 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:54:09.096389 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 15:54:09.096406 kernel: signal: max sigframe size: 1776
Feb 13 15:54:09.096424 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:54:09.096442 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:54:09.096464 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:54:09.096483 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:54:09.096502 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:54:09.096520 kernel: .... node #0, CPUs: #1
Feb 13 15:54:09.096540 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:54:09.096579 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:54:09.096597 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:54:09.096615 kernel: smpboot: Max logical packages: 1
Feb 13 15:54:09.096638 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 15:54:09.096666 kernel: devtmpfs: initialized
Feb 13 15:54:09.096684 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:54:09.096702 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 15:54:09.096721 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:54:09.096739 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:54:09.096757 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:54:09.096776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:54:09.096795 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:54:09.096818 kernel: audit: type=2000 audit(1739462047.951:1): state=initialized audit_enabled=0 res=1
Feb 13 15:54:09.096836 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:54:09.096854 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:54:09.096873 kernel: cpuidle: using governor menu
Feb 13 15:54:09.096891 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:54:09.096910 kernel: dca service started, version 1.12.1
Feb 13 15:54:09.096928 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:54:09.096947 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:54:09.096966 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:54:09.096988 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:54:09.097007 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:54:09.097026 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:54:09.097044 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:54:09.097063 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:54:09.097081 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:54:09.097100 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:54:09.097119 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:54:09.097137 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:54:09.097159 kernel: ACPI: Interpreter enabled
Feb 13 15:54:09.097177 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:54:09.097210 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:54:09.097237 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:54:09.097255 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 15:54:09.097273 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:54:09.097288 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:54:09.097962 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:54:09.098638 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:54:09.098862 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:54:09.098886 kernel: PCI host bridge to bus 0000:00
Feb 13 15:54:09.099067 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:54:09.099256 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:54:09.099427 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:54:09.099599 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 15:54:09.099791 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:54:09.100005 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:54:09.100237 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 15:54:09.100436 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 15:54:09.100617 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:54:09.100825 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 15:54:09.101019 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 13 15:54:09.103287 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 15:54:09.103512 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:54:09.103712 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 13 15:54:09.103899 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 15:54:09.104093 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:54:09.104328 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 13 15:54:09.104521 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 15:54:09.104546 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:54:09.104566 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:54:09.104584 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:54:09.104603 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:54:09.104622 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:54:09.104640 kernel: iommu: Default domain type: Translated
Feb 13 15:54:09.104667 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:54:09.104685 kernel: efivars: Registered efivars operations
Feb 13 15:54:09.104709 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:54:09.104727 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:54:09.104746 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 15:54:09.104765 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 15:54:09.104783 kernel: e820: reserve RAM buffer [mem 0xbd329000-0xbfffffff]
Feb 13 15:54:09.104801 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 15:54:09.104819 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 15:54:09.104837 kernel: vgaarb: loaded
Feb 13 15:54:09.104856 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:54:09.104878 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:54:09.104898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:54:09.104916 kernel: pnp: PnP ACPI init
Feb 13 15:54:09.104934 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 15:54:09.104953 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:54:09.104972 kernel: NET: Registered PF_INET protocol family
Feb 13 15:54:09.104991 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:54:09.105010 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 15:54:09.105029 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:54:09.105051 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:54:09.105070 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 15:54:09.105088 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 15:54:09.105107 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:54:09.105126 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:54:09.105145 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:54:09.105163 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:54:09.105471 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:54:09.105651 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:54:09.105835 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:54:09.106019 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 15:54:09.106243 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:54:09.106271 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:54:09.106291 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 15:54:09.106310 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 15:54:09.106335 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:54:09.106354 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 15:54:09.106372 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:54:09.106390 kernel: Initialise system trusted keyrings
Feb 13 15:54:09.106409 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 15:54:09.106427 kernel: Key type asymmetric registered
Feb 13 15:54:09.106445 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:54:09.106464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:54:09.106482 kernel: io scheduler mq-deadline registered
Feb 13 15:54:09.106505 kernel: io scheduler kyber registered
Feb 13 15:54:09.106523 kernel: io scheduler bfq registered
Feb 13 15:54:09.106540 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:54:09.106559 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 15:54:09.106758 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 15:54:09.106782 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 15:54:09.106965 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 15:54:09.106990 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 15:54:09.107172 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 15:54:09.107218 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:54:09.107237 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:54:09.107257 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 15:54:09.107276 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 15:54:09.107294 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 15:54:09.107485 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 15:54:09.107511 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:54:09.107530 kernel: i8042: Warning: Keylock active
Feb 13 15:54:09.107554 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:54:09.107573 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:54:09.107763 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:54:09.107938 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:54:09.108121 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:54:08 UTC (1739462048)
Feb 13 15:54:09.108701 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:54:09.108864 kernel: intel_pstate: CPU model not supported
Feb 13 15:54:09.108886 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:54:09.108911 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:54:09.108930 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:54:09.109083 kernel: Segment Routing with IPv6
Feb 13 15:54:09.109103 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:54:09.109122 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:54:09.109140 kernel: Key type dns_resolver registered
Feb 13 15:54:09.109159 kernel: IPI shorthand broadcast: enabled
Feb 13 15:54:09.109327 kernel: sched_clock: Marking stable (848004814, 157089495)->(1025284784, -20190475)
Feb 13 15:54:09.109346 kernel: registered taskstats version 1
Feb 13 15:54:09.109370 kernel: Loading compiled-in X.509 certificates
Feb 13 15:54:09.109524 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb'
Feb 13 15:54:09.109543 kernel: Key type .fscrypt registered
Feb 13 15:54:09.109561 kernel: Key type fscrypt-provisioning registered
Feb 13 15:54:09.109580 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:54:09.109599 kernel: ima: No architecture policies found
Feb 13 15:54:09.109618 kernel: clk: Disabling unused clocks
Feb 13 15:54:09.109636 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 15:54:09.109662 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:54:09.109685 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 15:54:09.109704 kernel: Run /init as init process
Feb 13 15:54:09.109722 kernel: with arguments:
Feb 13 15:54:09.109741 kernel: /init
Feb 13 15:54:09.109759 kernel: with environment:
Feb 13 15:54:09.109778 kernel: HOME=/
Feb 13 15:54:09.109797 kernel: TERM=linux
Feb 13 15:54:09.109815 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:54:09.109834 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 13 15:54:09.109861 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:54:09.109885 systemd[1]: Detected virtualization google.
Feb 13 15:54:09.109906 systemd[1]: Detected architecture x86-64.
Feb 13 15:54:09.109926 systemd[1]: Running in initrd.
Feb 13 15:54:09.109946 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:54:09.109965 systemd[1]: Hostname set to .
Feb 13 15:54:09.109985 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:54:09.110009 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:54:09.110029 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:54:09.110049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:54:09.110070 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:54:09.110090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:54:09.110110 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:54:09.110131 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:54:09.110158 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:54:09.110220 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:54:09.110242 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:54:09.110260 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:54:09.110278 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:54:09.110302 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:54:09.110322 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:54:09.110342 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:54:09.110361 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:54:09.110379 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:54:09.110397 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:54:09.110417 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:54:09.110434 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:54:09.110453 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:54:09.110480 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:54:09.110498 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:54:09.110516 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:54:09.110535 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:54:09.110554 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:54:09.110572 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:54:09.110591 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:54:09.110609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:54:09.110672 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 15:54:09.110717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:54:09.110736 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:54:09.110757 systemd-journald[184]: Journal started
Feb 13 15:54:09.110801 systemd-journald[184]: Runtime Journal (/run/log/journal/8bbb0bafbb98471faac5a14652d0431b) is 8.0M, max 148.6M, 140.6M free.
Feb 13 15:54:09.114923 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:54:09.119252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:54:09.121917 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 15:54:09.124807 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:54:09.147406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:54:09.157955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:54:09.167222 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:54:09.167254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:54:09.174229 kernel: Bridge firewalling registered
Feb 13 15:54:09.173977 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 15:54:09.175538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:54:09.179929 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:54:09.184692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:54:09.189752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:54:09.211506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:54:09.214396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:54:09.240433 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:54:09.244858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:54:09.248828 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:54:09.261416 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:54:09.269431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:54:09.290728 dracut-cmdline[216]: dracut-dracut-053
Feb 13 15:54:09.295580 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:54:09.323741 systemd-resolved[217]: Positive Trust Anchors:
Feb 13 15:54:09.324318 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:54:09.324533 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:54:09.330952 systemd-resolved[217]: Defaulting to hostname 'linux'.
Feb 13 15:54:09.336018 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:54:09.356455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:54:09.401238 kernel: SCSI subsystem initialized
Feb 13 15:54:09.412238 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:54:09.424210 kernel: iscsi: registered transport (tcp)
Feb 13 15:54:09.449238 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:54:09.449319 kernel: QLogic iSCSI HBA Driver
Feb 13 15:54:09.501141 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:54:09.506420 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:54:09.545697 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:54:09.545783 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:54:09.545832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:54:09.591227 kernel: raid6: avx2x4 gen() 18292 MB/s
Feb 13 15:54:09.608230 kernel: raid6: avx2x2 gen() 18055 MB/s
Feb 13 15:54:09.625621 kernel: raid6: avx2x1 gen() 13923 MB/s
Feb 13 15:54:09.625708 kernel: raid6: using algorithm avx2x4 gen() 18292 MB/s
Feb 13 15:54:09.643724 kernel: raid6: .... xor() 7779 MB/s, rmw enabled
Feb 13 15:54:09.643800 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:54:09.666218 kernel: xor: automatically using best checksumming function avx
Feb 13 15:54:09.833227 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:54:09.846923 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:54:09.854468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:54:09.885407 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Feb 13 15:54:09.892625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:54:09.905978 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:54:09.935632 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Feb 13 15:54:09.973052 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:54:09.977388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:54:10.069964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:54:10.084467 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:54:10.125307 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:54:10.146802 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:54:10.168470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:54:10.229528 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:54:10.181780 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:54:10.223422 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:54:10.287672 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:54:10.293222 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:54:10.303209 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:54:10.316158 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:54:10.334344 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Feb 13 15:54:10.347446 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:54:10.347779 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:54:10.384153 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 15:54:10.451865 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 15:54:10.452057 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 15:54:10.452271 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 15:54:10.452506 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 15:54:10.452770 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:54:10.452799 kernel: GPT:17805311 != 25165823
Feb 13 15:54:10.452822 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:54:10.452843 kernel: GPT:17805311 != 25165823
Feb 13 15:54:10.452977 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:54:10.453016 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:54:10.453044 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 15:54:10.406551 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:54:10.452323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:54:10.452605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:54:10.528898 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (447)
Feb 13 15:54:10.528940 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (445)
Feb 13 15:54:10.463582 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:54:10.512619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:54:10.571584 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 15:54:10.572112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:54:10.617073 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 15:54:10.623860 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 15:54:10.652929 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 15:54:10.668395 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 15:54:10.698447 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:54:10.710496 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:54:10.744659 disk-uuid[543]: Primary Header is updated.
Feb 13 15:54:10.744659 disk-uuid[543]: Secondary Entries is updated.
Feb 13 15:54:10.744659 disk-uuid[543]: Secondary Header is updated.
Feb 13 15:54:10.766214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:54:10.788217 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:54:10.810420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:54:11.801583 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:54:11.801666 disk-uuid[544]: The operation has completed successfully.
Feb 13 15:54:11.881142 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:54:11.881322 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:54:11.910416 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:54:11.943302 sh[566]: Success
Feb 13 15:54:11.967227 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:54:12.055521 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:54:12.062651 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:54:12.089647 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:54:12.123616 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a
Feb 13 15:54:12.123709 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:54:12.123735 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:54:12.134158 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:54:12.140091 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:54:12.170239 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:54:12.175164 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:54:12.176137 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:54:12.182487 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:54:12.194452 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:54:12.253238 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:12.253348 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:54:12.253375 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:54:12.275748 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:54:12.275818 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:54:12.287719 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:54:12.305372 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:12.314762 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:54:12.339504 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:54:12.435612 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:54:12.463611 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:54:12.530525 ignition[665]: Ignition 2.20.0
Feb 13 15:54:12.530547 ignition[665]: Stage: fetch-offline
Feb 13 15:54:12.532673 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:54:12.530646 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:12.546308 systemd-networkd[749]: lo: Link UP
Feb 13 15:54:12.530664 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:12.546314 systemd-networkd[749]: lo: Gained carrier
Feb 13 15:54:12.530822 ignition[665]: parsed url from cmdline: ""
Feb 13 15:54:12.548063 systemd-networkd[749]: Enumeration completed
Feb 13 15:54:12.530830 ignition[665]: no config URL provided
Feb 13 15:54:12.548854 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:54:12.530838 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:54:12.548860 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:54:12.530851 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:54:12.550878 systemd-networkd[749]: eth0: Link UP
Feb 13 15:54:12.530862 ignition[665]: failed to fetch config: resource requires networking
Feb 13 15:54:12.550884 systemd-networkd[749]: eth0: Gained carrier
Feb 13 15:54:12.531138 ignition[665]: Ignition finished successfully
Feb 13 15:54:12.550894 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:54:12.638689 ignition[758]: Ignition 2.20.0
Feb 13 15:54:12.559294 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.29/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 15:54:12.638699 ignition[758]: Stage: fetch
Feb 13 15:54:12.574610 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:54:12.638891 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:12.593049 systemd[1]: Reached target network.target - Network.
Feb 13 15:54:12.638909 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:12.609550 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:54:12.639053 ignition[758]: parsed url from cmdline: ""
Feb 13 15:54:12.648301 unknown[758]: fetched base config from "system"
Feb 13 15:54:12.639058 ignition[758]: no config URL provided
Feb 13 15:54:12.648312 unknown[758]: fetched base config from "system"
Feb 13 15:54:12.639066 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:54:12.648321 unknown[758]: fetched user config from "gcp"
Feb 13 15:54:12.639076 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:54:12.650836 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:54:12.639103 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 15:54:12.683404 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:54:12.642875 ignition[758]: GET result: OK
Feb 13 15:54:12.722289 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:54:12.642938 ignition[758]: parsing config with SHA512: 19643ebabb4ea4f7294a301f3a653a2f45a2c63a73490e40a7b164ce7214b2f5a1469c95d26c3c8e20b71ee92c4137d00c13913006a767c6c6b7efc98f725e60
Feb 13 15:54:12.746408 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:54:12.648682 ignition[758]: fetch: fetch complete
Feb 13 15:54:12.781321 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:54:12.648692 ignition[758]: fetch: fetch passed
Feb 13 15:54:12.803565 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:54:12.648763 ignition[758]: Ignition finished successfully
Feb 13 15:54:12.820362 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:54:12.720113 ignition[765]: Ignition 2.20.0
Feb 13 15:54:12.839369 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:54:12.720124 ignition[765]: Stage: kargs
Feb 13 15:54:12.854364 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:54:12.720372 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:12.869350 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:54:12.720385 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:12.893395 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:54:12.721104 ignition[765]: kargs: kargs passed
Feb 13 15:54:12.721155 ignition[765]: Ignition finished successfully
Feb 13 15:54:12.779155 ignition[770]: Ignition 2.20.0
Feb 13 15:54:12.779167 ignition[770]: Stage: disks
Feb 13 15:54:12.779395 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:12.779408 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:12.780102 ignition[770]: disks: disks passed
Feb 13 15:54:12.780149 ignition[770]: Ignition finished successfully
Feb 13 15:54:12.948531 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:54:13.132205 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:54:13.165344 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:54:13.283236 kernel: EXT4-fs (sda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none.
Feb 13 15:54:13.283840 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:54:13.284729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:54:13.305328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:54:13.337360 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:54:13.353800 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (787)
Feb 13 15:54:13.380160 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:13.380269 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:54:13.380295 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:54:13.384715 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:54:13.424399 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:54:13.424445 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:54:13.384780 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:54:13.384812 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:54:13.411687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:54:13.432628 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:54:13.456435 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:54:13.600262 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:54:13.611338 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:54:13.622786 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:54:13.632311 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:54:13.759861 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:54:13.766341 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:54:13.799228 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:13.810424 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:54:13.820474 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:54:13.824526 systemd-networkd[749]: eth0: Gained IPv6LL
Feb 13 15:54:13.862213 ignition[899]: INFO : Ignition 2.20.0
Feb 13 15:54:13.862213 ignition[899]: INFO : Stage: mount
Feb 13 15:54:13.876521 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:13.876521 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:13.876521 ignition[899]: INFO : mount: mount passed
Feb 13 15:54:13.876521 ignition[899]: INFO : Ignition finished successfully
Feb 13 15:54:13.866137 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:54:13.889618 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:54:13.903411 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:54:13.949454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:54:14.020738 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (912)
Feb 13 15:54:14.020790 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:14.020816 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:54:14.020839 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:54:14.020862 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:54:14.020885 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:54:14.029763 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:54:14.072602 ignition[929]: INFO : Ignition 2.20.0
Feb 13 15:54:14.072602 ignition[929]: INFO : Stage: files
Feb 13 15:54:14.087352 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:14.087352 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:14.087352 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:54:14.087352 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:54:14.087352 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:54:14.079966 unknown[929]: wrote ssh authorized keys file for user: core
Feb 13 15:54:21.894159 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:54:22.234486 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:54:22.252337 ignition[929]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:54:22.252337 ignition[929]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:54:22.252337 ignition[929]: INFO : files: files passed
Feb 13 15:54:22.252337 ignition[929]: INFO : Ignition finished successfully
Feb 13 15:54:22.236550 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:54:22.259552 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:54:22.283503 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:54:22.331999 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:54:22.332313 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:54:22.403483 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:54:22.403483 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:54:22.352820 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:54:22.454447 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:54:22.375214 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:54:22.399465 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:54:22.471475 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:54:22.471596 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:54:22.490257 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:54:22.510487 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:54:22.528553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:54:22.535500 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:54:22.611782 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:54:22.629415 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:54:22.665530 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:54:22.665835 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:54:22.696691 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:54:22.706754 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:54:22.706958 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:54:22.740679 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:54:22.751741 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:54:22.768729 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:54:22.783731 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:54:22.812767 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:54:22.822765 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:54:22.850798 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:54:22.860741 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:54:22.881749 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:54:22.898729 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:54:22.915745 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:54:22.915951 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:54:22.956447 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:54:22.956848 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:54:22.974716 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:54:22.974894 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:54:22.994681 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:54:22.994880 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:54:23.041711 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:54:23.041936 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:54:23.051723 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:54:23.051899 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:54:23.078566 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:54:23.126381 ignition[981]: INFO : Ignition 2.20.0
Feb 13 15:54:23.126381 ignition[981]: INFO : Stage: umount
Feb 13 15:54:23.126381 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:23.126381 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:23.126381 ignition[981]: INFO : umount: umount passed
Feb 13 15:54:23.126381 ignition[981]: INFO : Ignition finished successfully
Feb 13 15:54:23.121238 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:54:23.134489 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:54:23.134733 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:54:23.194632 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:54:23.194835 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:54:23.226642 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:54:23.227693 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:54:23.227808 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:54:23.234057 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:54:23.234170 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:54:23.253673 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:54:23.253797 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:54:23.270905 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:54:23.270971 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:54:23.296576 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:54:23.296655 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:54:23.304616 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:54:23.304688 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:54:23.331548 systemd[1]: Stopped target network.target - Network. Feb 13 15:54:23.339565 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:54:23.339650 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:54:23.354637 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:54:23.372558 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:54:23.376282 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:54:23.390564 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:54:23.408547 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:54:23.423602 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:54:23.423668 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:54:23.438635 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:54:23.438699 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:54:23.455617 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:54:23.455696 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Feb 13 15:54:23.472660 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:54:23.472737 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:54:23.489616 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:54:23.489696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:54:23.516808 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:54:23.521253 systemd-networkd[749]: eth0: DHCPv6 lease lost Feb 13 15:54:23.536579 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:54:23.556897 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:54:23.557031 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:54:23.566020 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:54:23.566449 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:54:23.584181 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:54:23.584324 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:54:23.628365 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:54:23.636352 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:54:23.636481 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:54:23.648453 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:54:23.648561 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:54:23.659416 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:54:23.659525 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:54:23.677426 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Feb 13 15:54:23.677539 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:54:23.696699 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:54:23.715918 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:54:23.716093 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:54:23.743637 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:54:24.150375 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 13 15:54:23.743710 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:54:23.763611 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:54:23.763667 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:54:23.791543 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:54:23.791634 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:54:23.821665 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:54:23.821751 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:54:23.848630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:54:23.848718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:54:23.900473 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:54:23.922353 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:54:23.922473 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:54:23.943462 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:54:23.943556 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Feb 13 15:54:23.964440 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:54:23.964531 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:54:23.985532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:54:23.985632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:54:24.007951 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:54:24.008081 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:54:24.025772 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:54:24.025892 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:54:24.048668 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:54:24.073435 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:54:24.103378 systemd[1]: Switching root. Feb 13 15:54:24.400325 systemd-journald[184]: Journal stopped
Feb 13 15:54:09.094014 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Feb 13 15:54:09.094034 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Feb 13 15:54:09.094050 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Feb 13 15:54:09.094066 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Feb 13 15:54:09.094083 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Feb 13 15:54:09.094100 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001
GOOG 00000001) Feb 13 15:54:09.094117 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Feb 13 15:54:09.094134 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Feb 13 15:54:09.094155 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Feb 13 15:54:09.094172 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Feb 13 15:54:09.094206 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Feb 13 15:54:09.094222 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Feb 13 15:54:09.094238 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Feb 13 15:54:09.094255 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Feb 13 15:54:09.094271 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Feb 13 15:54:09.094288 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Feb 13 15:54:09.094305 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Feb 13 15:54:09.094327 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Feb 13 15:54:09.094343 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 15:54:09.094359 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 15:54:09.094376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 15:54:09.094393 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Feb 13 15:54:09.094410 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Feb 13 15:54:09.094427 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Feb 13 15:54:09.094445 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Feb 13 15:54:09.094463 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Feb 13 15:54:09.094485 kernel: Zone 
ranges: Feb 13 15:54:09.094502 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:54:09.094520 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 15:54:09.094538 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Feb 13 15:54:09.094555 kernel: Movable zone start for each node Feb 13 15:54:09.094572 kernel: Early memory node ranges Feb 13 15:54:09.094590 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Feb 13 15:54:09.094608 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Feb 13 15:54:09.094625 kernel: node 0: [mem 0x0000000000100000-0x00000000bd328fff] Feb 13 15:54:09.094646 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff] Feb 13 15:54:09.094673 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Feb 13 15:54:09.094690 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Feb 13 15:54:09.094708 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Feb 13 15:54:09.094726 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:54:09.094744 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Feb 13 15:54:09.094761 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Feb 13 15:54:09.094779 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Feb 13 15:54:09.094796 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 15:54:09.094816 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Feb 13 15:54:09.094833 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 15:54:09.094849 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:54:09.094866 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:54:09.094882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:54:09.094898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:54:09.094914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 
15:54:09.094930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:54:09.094946 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:54:09.094967 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 15:54:09.094984 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 13 15:54:09.095000 kernel: Booting paravirtualized kernel on KVM Feb 13 15:54:09.095017 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:54:09.095034 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 15:54:09.095050 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 15:54:09.095066 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 15:54:09.095082 kernel: pcpu-alloc: [0] 0 1 Feb 13 15:54:09.095099 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:54:09.095119 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:54:09.095138 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05 Feb 13 15:54:09.095155 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:54:09.095172 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 15:54:09.095214 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:54:09.095232 kernel: Fallback order for Node 0: 0 Feb 13 15:54:09.095250 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932272 Feb 13 15:54:09.095267 kernel: Policy zone: Normal Feb 13 15:54:09.095290 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:54:09.095306 kernel: software IO TLB: area num 2. Feb 13 15:54:09.095323 kernel: Memory: 7511320K/7860552K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 348976K reserved, 0K cma-reserved) Feb 13 15:54:09.095340 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:54:09.095356 kernel: Kernel/User page tables isolation: enabled Feb 13 15:54:09.095373 kernel: ftrace: allocating 37890 entries in 149 pages Feb 13 15:54:09.095389 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:54:09.095406 kernel: Dynamic Preempt: voluntary Feb 13 15:54:09.095440 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:54:09.095464 kernel: rcu: RCU event tracing is enabled. Feb 13 15:54:09.095482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:54:09.095499 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:54:09.095520 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:54:09.095539 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:54:09.095556 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:54:09.095574 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:54:09.095592 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 15:54:09.095614 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 15:54:09.095632 kernel: Console: colour dummy device 80x25 Feb 13 15:54:09.095650 kernel: printk: console [ttyS0] enabled Feb 13 15:54:09.095684 kernel: ACPI: Core revision 20230628 Feb 13 15:54:09.095702 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:54:09.095720 kernel: x2apic enabled Feb 13 15:54:09.095738 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:54:09.095756 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Feb 13 15:54:09.095774 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 15:54:09.095797 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Feb 13 15:54:09.095815 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Feb 13 15:54:09.095833 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Feb 13 15:54:09.095852 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:54:09.095870 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 15:54:09.095888 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 15:54:09.095906 kernel: Spectre V2 : Mitigation: IBRS Feb 13 15:54:09.095925 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:54:09.095947 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:54:09.095965 kernel: RETBleed: Mitigation: IBRS Feb 13 15:54:09.095984 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:54:09.096002 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Feb 13 15:54:09.096021 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:54:09.096039 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 15:54:09.096057 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 
13 15:54:09.096076 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:54:09.096094 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:54:09.096116 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:54:09.096134 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:54:09.096153 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 15:54:09.096172 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:54:09.096205 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:54:09.096222 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:54:09.096240 kernel: landlock: Up and running. Feb 13 15:54:09.096258 kernel: SELinux: Initializing. Feb 13 15:54:09.096275 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:54:09.096297 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:54:09.096315 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Feb 13 15:54:09.096334 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:54:09.096353 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:54:09.096372 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:54:09.096389 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 13 15:54:09.096406 kernel: signal: max sigframe size: 1776 Feb 13 15:54:09.096424 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:54:09.096442 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:54:09.096464 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 15:54:09.096483 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 15:54:09.096502 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:54:09.096520 kernel: .... node #0, CPUs: #1 Feb 13 15:54:09.096540 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 15:54:09.096579 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 15:54:09.096597 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:54:09.096615 kernel: smpboot: Max logical packages: 1 Feb 13 15:54:09.096638 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Feb 13 15:54:09.096666 kernel: devtmpfs: initialized Feb 13 15:54:09.096684 kernel: x86/mm: Memory block size: 128MB Feb 13 15:54:09.096702 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Feb 13 15:54:09.096721 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:54:09.096739 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:54:09.096757 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:54:09.096776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:54:09.096795 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:54:09.096818 kernel: audit: type=2000 audit(1739462047.951:1): state=initialized audit_enabled=0 res=1 Feb 13 15:54:09.096836 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:54:09.096854 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:54:09.096873 kernel: cpuidle: using governor menu Feb 13 15:54:09.096891 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:54:09.096910 kernel: dca service started, version 1.12.1 Feb 13 15:54:09.096928 kernel: PCI: Using configuration type 1 for base access Feb 13 
15:54:09.096947 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 15:54:09.096966 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:54:09.096988 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:54:09.097007 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:54:09.097026 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:54:09.097044 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:54:09.097063 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:54:09.097081 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:54:09.097100 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:54:09.097119 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 15:54:09.097137 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:54:09.097159 kernel: ACPI: Interpreter enabled Feb 13 15:54:09.097177 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:54:09.097210 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:54:09.097237 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:54:09.097255 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 15:54:09.097273 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 15:54:09.097288 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:54:09.097962 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:54:09.098638 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 15:54:09.098862 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 15:54:09.098886 kernel: PCI host bridge to bus 0000:00 Feb 13 15:54:09.099067 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Feb 13 15:54:09.099256 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:54:09.099427 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 15:54:09.099599 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Feb 13 15:54:09.099791 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:54:09.100005 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 15:54:09.100237 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Feb 13 15:54:09.100436 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 15:54:09.100617 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 15:54:09.100825 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Feb 13 15:54:09.101019 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Feb 13 15:54:09.103287 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Feb 13 15:54:09.103512 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:54:09.103712 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Feb 13 15:54:09.103899 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Feb 13 15:54:09.104093 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:54:09.104328 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 13 15:54:09.104521 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Feb 13 15:54:09.104546 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:54:09.104566 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 15:54:09.104584 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:54:09.104603 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:54:09.104622 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 15:54:09.104640 kernel: iommu: Default domain type: Translated Feb 13 15:54:09.104667 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode
Feb 13 15:54:09.104685 kernel: efivars: Registered efivars operations
Feb 13 15:54:09.104709 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:54:09.104727 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:54:09.104746 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 15:54:09.104765 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 15:54:09.104783 kernel: e820: reserve RAM buffer [mem 0xbd329000-0xbfffffff]
Feb 13 15:54:09.104801 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 15:54:09.104819 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 15:54:09.104837 kernel: vgaarb: loaded
Feb 13 15:54:09.104856 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:54:09.104878 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:54:09.104898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:54:09.104916 kernel: pnp: PnP ACPI init
Feb 13 15:54:09.104934 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 15:54:09.104953 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:54:09.104972 kernel: NET: Registered PF_INET protocol family
Feb 13 15:54:09.104991 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:54:09.105010 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 15:54:09.105029 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:54:09.105051 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:54:09.105070 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 15:54:09.105088 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 15:54:09.105107 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:54:09.105126 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:54:09.105145 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:54:09.105163 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:54:09.105471 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:54:09.105651 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:54:09.105835 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:54:09.106019 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 15:54:09.106243 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:54:09.106271 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:54:09.106291 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 15:54:09.106310 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 15:54:09.106335 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:54:09.106354 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 15:54:09.106372 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:54:09.106390 kernel: Initialise system trusted keyrings
Feb 13 15:54:09.106409 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 15:54:09.106427 kernel: Key type asymmetric registered
Feb 13 15:54:09.106445 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:54:09.106464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:54:09.106482 kernel: io scheduler mq-deadline registered
Feb 13 15:54:09.106505 kernel: io scheduler kyber registered
Feb 13 15:54:09.106523 kernel: io scheduler bfq registered
Feb 13 15:54:09.106540 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:54:09.106559 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 15:54:09.106758 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 15:54:09.106782 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 15:54:09.106965 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 15:54:09.106990 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 15:54:09.107172 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 15:54:09.107218 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:54:09.107237 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:54:09.107257 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 15:54:09.107276 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 15:54:09.107294 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 15:54:09.107485 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 15:54:09.107511 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:54:09.107530 kernel: i8042: Warning: Keylock active
Feb 13 15:54:09.107554 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:54:09.107573 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:54:09.107763 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:54:09.107938 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:54:09.108121 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:54:08 UTC (1739462048)
Feb 13 15:54:09.108701 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:54:09.108864 kernel: intel_pstate: CPU model not supported
Feb 13 15:54:09.108886 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:54:09.108911 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:54:09.108930 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:54:09.109083 kernel: Segment Routing with IPv6
Feb 13 15:54:09.109103 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:54:09.109122 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:54:09.109140 kernel: Key type dns_resolver registered
Feb 13 15:54:09.109159 kernel: IPI shorthand broadcast: enabled
Feb 13 15:54:09.109327 kernel: sched_clock: Marking stable (848004814, 157089495)->(1025284784, -20190475)
Feb 13 15:54:09.109346 kernel: registered taskstats version 1
Feb 13 15:54:09.109370 kernel: Loading compiled-in X.509 certificates
Feb 13 15:54:09.109524 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb'
Feb 13 15:54:09.109543 kernel: Key type .fscrypt registered
Feb 13 15:54:09.109561 kernel: Key type fscrypt-provisioning registered
Feb 13 15:54:09.109580 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:54:09.109599 kernel: ima: No architecture policies found
Feb 13 15:54:09.109618 kernel: clk: Disabling unused clocks
Feb 13 15:54:09.109636 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 15:54:09.109662 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:54:09.109685 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 15:54:09.109704 kernel: Run /init as init process
Feb 13 15:54:09.109722 kernel: with arguments:
Feb 13 15:54:09.109741 kernel: /init
Feb 13 15:54:09.109759 kernel: with environment:
Feb 13 15:54:09.109778 kernel: HOME=/
Feb 13 15:54:09.109797 kernel: TERM=linux
Feb 13 15:54:09.109815 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:54:09.109834 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 13 15:54:09.109861 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:54:09.109885 systemd[1]: Detected virtualization google.
Feb 13 15:54:09.109906 systemd[1]: Detected architecture x86-64.
Feb 13 15:54:09.109926 systemd[1]: Running in initrd.
Feb 13 15:54:09.109946 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:54:09.109965 systemd[1]: Hostname set to .
Feb 13 15:54:09.109985 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:54:09.110009 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:54:09.110029 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:54:09.110049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:54:09.110070 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:54:09.110090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:54:09.110110 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:54:09.110131 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:54:09.110158 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:54:09.110220 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:54:09.110242 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:54:09.110260 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:54:09.110278 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:54:09.110302 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:54:09.110322 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:54:09.110342 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:54:09.110361 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:54:09.110379 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:54:09.110397 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:54:09.110417 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:54:09.110434 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:54:09.110453 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:54:09.110480 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:54:09.110498 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:54:09.110516 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:54:09.110535 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:54:09.110554 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:54:09.110572 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:54:09.110591 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:54:09.110609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:54:09.110672 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 15:54:09.110717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:54:09.110736 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:54:09.110757 systemd-journald[184]: Journal started
Feb 13 15:54:09.110801 systemd-journald[184]: Runtime Journal (/run/log/journal/8bbb0bafbb98471faac5a14652d0431b) is 8.0M, max 148.6M, 140.6M free.
Feb 13 15:54:09.114923 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:54:09.119252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:54:09.121917 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 15:54:09.124807 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:54:09.147406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:54:09.157955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:54:09.167222 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:54:09.167254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:54:09.174229 kernel: Bridge firewalling registered
Feb 13 15:54:09.173977 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 15:54:09.175538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:54:09.179929 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:54:09.184692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:54:09.189752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:54:09.211506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:54:09.214396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:54:09.240433 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:54:09.244858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:54:09.248828 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:54:09.261416 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:54:09.269431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:54:09.290728 dracut-cmdline[216]: dracut-dracut-053
Feb 13 15:54:09.295580 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:54:09.323741 systemd-resolved[217]: Positive Trust Anchors:
Feb 13 15:54:09.324318 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:54:09.324533 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:54:09.330952 systemd-resolved[217]: Defaulting to hostname 'linux'.
Feb 13 15:54:09.336018 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:54:09.356455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:54:09.401238 kernel: SCSI subsystem initialized
Feb 13 15:54:09.412238 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:54:09.424210 kernel: iscsi: registered transport (tcp)
Feb 13 15:54:09.449238 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:54:09.449319 kernel: QLogic iSCSI HBA Driver
Feb 13 15:54:09.501141 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:54:09.506420 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:54:09.545697 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:54:09.545783 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:54:09.545832 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:54:09.591227 kernel: raid6: avx2x4 gen() 18292 MB/s
Feb 13 15:54:09.608230 kernel: raid6: avx2x2 gen() 18055 MB/s
Feb 13 15:54:09.625621 kernel: raid6: avx2x1 gen() 13923 MB/s
Feb 13 15:54:09.625708 kernel: raid6: using algorithm avx2x4 gen() 18292 MB/s
Feb 13 15:54:09.643724 kernel: raid6: .... xor() 7779 MB/s, rmw enabled
Feb 13 15:54:09.643800 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:54:09.666218 kernel: xor: automatically using best checksumming function avx
Feb 13 15:54:09.833227 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:54:09.846923 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:54:09.854468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:54:09.885407 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Feb 13 15:54:09.892625 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:54:09.905978 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:54:09.935632 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Feb 13 15:54:09.973052 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:54:09.977388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:54:10.069964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:54:10.084467 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:54:10.125307 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:54:10.146802 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:54:10.168470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:54:10.229528 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:54:10.181780 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:54:10.223422 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:54:10.287672 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:54:10.293222 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:54:10.303209 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:54:10.316158 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:54:10.334344 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Feb 13 15:54:10.347446 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:54:10.347779 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:54:10.384153 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 15:54:10.451865 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 15:54:10.452057 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 15:54:10.452271 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 15:54:10.452506 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 15:54:10.452770 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:54:10.452799 kernel: GPT:17805311 != 25165823
Feb 13 15:54:10.452822 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:54:10.452843 kernel: GPT:17805311 != 25165823
Feb 13 15:54:10.452977 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:54:10.453016 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:54:10.453044 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 15:54:10.406551 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:54:10.452323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:54:10.452605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:54:10.528898 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (447)
Feb 13 15:54:10.528940 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (445)
Feb 13 15:54:10.463582 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:54:10.512619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:54:10.571584 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 15:54:10.572112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:54:10.617073 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 15:54:10.623860 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 15:54:10.652929 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 15:54:10.668395 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 15:54:10.698447 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:54:10.710496 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:54:10.744659 disk-uuid[543]: Primary Header is updated.
Feb 13 15:54:10.744659 disk-uuid[543]: Secondary Entries is updated.
Feb 13 15:54:10.744659 disk-uuid[543]: Secondary Header is updated.
Feb 13 15:54:10.766214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:54:10.788217 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:54:10.810420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:54:11.801583 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:54:11.801666 disk-uuid[544]: The operation has completed successfully.
Feb 13 15:54:11.881142 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:54:11.881322 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:54:11.910416 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:54:11.943302 sh[566]: Success
Feb 13 15:54:11.967227 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:54:12.055521 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:54:12.062651 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:54:12.089647 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:54:12.123616 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a
Feb 13 15:54:12.123709 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:54:12.123735 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:54:12.134158 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:54:12.140091 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:54:12.170239 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:54:12.175164 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:54:12.176137 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:54:12.182487 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:54:12.194452 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:54:12.253238 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:12.253348 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:54:12.253375 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:54:12.275748 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:54:12.275818 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:54:12.287719 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:54:12.305372 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:12.314762 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:54:12.339504 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:54:12.435612 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:54:12.463611 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:54:12.530525 ignition[665]: Ignition 2.20.0
Feb 13 15:54:12.530547 ignition[665]: Stage: fetch-offline
Feb 13 15:54:12.532673 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:54:12.530646 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:12.546308 systemd-networkd[749]: lo: Link UP
Feb 13 15:54:12.530664 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:12.546314 systemd-networkd[749]: lo: Gained carrier
Feb 13 15:54:12.530822 ignition[665]: parsed url from cmdline: ""
Feb 13 15:54:12.548063 systemd-networkd[749]: Enumeration completed
Feb 13 15:54:12.530830 ignition[665]: no config URL provided
Feb 13 15:54:12.548854 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:54:12.530838 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:54:12.548860 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:54:12.530851 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:54:12.550878 systemd-networkd[749]: eth0: Link UP
Feb 13 15:54:12.530862 ignition[665]: failed to fetch config: resource requires networking
Feb 13 15:54:12.550884 systemd-networkd[749]: eth0: Gained carrier
Feb 13 15:54:12.531138 ignition[665]: Ignition finished successfully
Feb 13 15:54:12.550894 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:54:12.638689 ignition[758]: Ignition 2.20.0
Feb 13 15:54:12.559294 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.29/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 15:54:12.638699 ignition[758]: Stage: fetch
Feb 13 15:54:12.574610 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:54:12.638891 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:12.593049 systemd[1]: Reached target network.target - Network.
Feb 13 15:54:12.638909 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:12.609550 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:54:12.639053 ignition[758]: parsed url from cmdline: ""
Feb 13 15:54:12.648301 unknown[758]: fetched base config from "system"
Feb 13 15:54:12.639058 ignition[758]: no config URL provided
Feb 13 15:54:12.648312 unknown[758]: fetched base config from "system"
Feb 13 15:54:12.639066 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:54:12.648321 unknown[758]: fetched user config from "gcp"
Feb 13 15:54:12.639076 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:54:12.650836 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:54:12.639103 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 15:54:12.683404 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:54:12.642875 ignition[758]: GET result: OK
Feb 13 15:54:12.722289 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:54:12.642938 ignition[758]: parsing config with SHA512: 19643ebabb4ea4f7294a301f3a653a2f45a2c63a73490e40a7b164ce7214b2f5a1469c95d26c3c8e20b71ee92c4137d00c13913006a767c6c6b7efc98f725e60
Feb 13 15:54:12.746408 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:54:12.648682 ignition[758]: fetch: fetch complete
Feb 13 15:54:12.781321 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:54:12.648692 ignition[758]: fetch: fetch passed
Feb 13 15:54:12.803565 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:54:12.648763 ignition[758]: Ignition finished successfully
Feb 13 15:54:12.820362 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:54:12.720113 ignition[765]: Ignition 2.20.0
Feb 13 15:54:12.839369 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:54:12.720124 ignition[765]: Stage: kargs
Feb 13 15:54:12.854364 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:54:12.720372 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:12.869350 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:54:12.720385 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:12.893395 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:54:12.721104 ignition[765]: kargs: kargs passed
Feb 13 15:54:12.721155 ignition[765]: Ignition finished successfully
Feb 13 15:54:12.779155 ignition[770]: Ignition 2.20.0
Feb 13 15:54:12.779167 ignition[770]: Stage: disks
Feb 13 15:54:12.779395 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:12.779408 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:12.780102 ignition[770]: disks: disks passed
Feb 13 15:54:12.780149 ignition[770]: Ignition finished successfully
Feb 13 15:54:12.948531 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:54:13.132205 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:54:13.165344 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:54:13.283236 kernel: EXT4-fs (sda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none.
Feb 13 15:54:13.283840 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:54:13.284729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:54:13.305328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:54:13.337360 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:54:13.353800 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (787)
Feb 13 15:54:13.380160 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:13.380269 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:54:13.380295 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:54:13.384715 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:54:13.424399 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:54:13.424445 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:54:13.384780 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:54:13.384812 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:54:13.411687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:54:13.432628 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:54:13.456435 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:54:13.600262 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:54:13.611338 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:54:13.622786 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:54:13.632311 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:54:13.759861 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:54:13.766341 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:54:13.799228 kernel: BTRFS info (device sda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:13.810424 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:54:13.820474 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:54:13.824526 systemd-networkd[749]: eth0: Gained IPv6LL
Feb 13 15:54:13.862213 ignition[899]: INFO : Ignition 2.20.0
Feb 13 15:54:13.862213 ignition[899]: INFO : Stage: mount
Feb 13 15:54:13.876521 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:13.876521 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:13.876521 ignition[899]: INFO : mount: mount passed
Feb 13 15:54:13.876521 ignition[899]: INFO : Ignition finished successfully
Feb 13 15:54:13.866137 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:54:13.889618 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:54:13.903411 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:54:13.949454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:54:14.020738 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (912)
Feb 13 15:54:14.020790 kernel: BTRFS info (device sda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:54:14.020816 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:54:14.020839 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:54:14.020862 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:54:14.020885 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:54:14.029763 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:54:14.072602 ignition[929]: INFO : Ignition 2.20.0
Feb 13 15:54:14.072602 ignition[929]: INFO : Stage: files
Feb 13 15:54:14.087352 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:14.087352 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:14.087352 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:54:14.087352 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:54:14.087352 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:54:14.087352 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:54:14.079966 unknown[929]: wrote ssh authorized keys file for user: core
Feb 13 15:54:21.894159 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:54:22.234486 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:54:22.252337 ignition[929]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:54:22.252337 ignition[929]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:54:22.252337 ignition[929]: INFO : files: files passed
Feb 13 15:54:22.252337 ignition[929]: INFO : Ignition finished successfully
Feb 13 15:54:22.236550 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:54:22.259552 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:54:22.283503 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:54:22.331999 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:54:22.332313 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:54:22.403483 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:54:22.403483 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:54:22.352820 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:54:22.454447 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:54:22.375214 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:54:22.399465 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:54:22.471475 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:54:22.471596 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:54:22.490257 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:54:22.510487 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:54:22.528553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:54:22.535500 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:54:22.611782 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:54:22.629415 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:54:22.665530 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:54:22.665835 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:54:22.696691 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:54:22.706754 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:54:22.706958 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:54:22.740679 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:54:22.751741 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:54:22.768729 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:54:22.783731 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:54:22.812767 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:54:22.822765 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:54:22.850798 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:54:22.860741 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:54:22.881749 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:54:22.898729 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:54:22.915745 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:54:22.915951 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:54:22.956447 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:54:22.956848 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:54:22.974716 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:54:22.974894 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:54:22.994681 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:54:22.994880 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:54:23.041711 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:54:23.041936 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:54:23.051723 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:54:23.051899 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:54:23.078566 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:54:23.126381 ignition[981]: INFO : Ignition 2.20.0
Feb 13 15:54:23.126381 ignition[981]: INFO : Stage: umount
Feb 13 15:54:23.126381 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:54:23.126381 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:54:23.126381 ignition[981]: INFO : umount: umount passed
Feb 13 15:54:23.126381 ignition[981]: INFO : Ignition finished successfully
Feb 13 15:54:23.121238 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:54:23.134489 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:54:23.134733 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:54:23.194632 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:54:23.194835 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:54:23.226642 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:54:23.227693 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:54:23.227808 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:54:23.234057 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:54:23.234170 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:54:23.253673 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:54:23.253797 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:54:23.270905 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:54:23.270971 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:54:23.296576 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:54:23.296655 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:54:23.304616 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:54:23.304688 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:54:23.331548 systemd[1]: Stopped target network.target - Network.
Feb 13 15:54:23.339565 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:54:23.339650 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:54:23.354637 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:54:23.372558 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:54:23.376282 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:54:23.390564 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:54:23.408547 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:54:23.423602 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:54:23.423668 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:54:23.438635 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:54:23.438699 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:54:23.455617 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:54:23.455696 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:54:23.472660 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:54:23.472737 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:54:23.489616 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:54:23.489696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:54:23.516808 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:54:23.521253 systemd-networkd[749]: eth0: DHCPv6 lease lost
Feb 13 15:54:23.536579 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:54:23.556897 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:54:23.557031 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:54:23.566020 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:54:23.566449 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:54:23.584181 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:54:23.584324 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:54:23.628365 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:54:23.636352 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:54:23.636481 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:54:23.648453 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:54:23.648561 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:54:23.659416 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:54:23.659525 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:54:23.677426 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:54:23.677539 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:54:23.696699 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:54:23.715918 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:54:23.716093 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:54:23.743637 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:54:24.150375 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:54:23.743710 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:54:23.763611 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:54:23.763667 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:54:23.791543 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:54:23.791634 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:54:23.821665 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:54:23.821751 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:54:23.848630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:54:23.848718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:54:23.900473 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:54:23.922353 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:54:23.922473 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:54:23.943462 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:54:23.943556 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:54:23.964440 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:54:23.964531 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:54:23.985532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:54:23.985632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:54:24.007951 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:54:24.008081 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:54:24.025772 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:54:24.025892 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:54:24.048668 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:54:24.073435 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:54:24.103378 systemd[1]: Switching root.
Feb 13 15:54:24.400325 systemd-journald[184]: Journal stopped
Feb 13 15:54:26.777019 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:54:26.777076 kernel: SELinux: policy capability open_perms=1
Feb 13 15:54:26.777098 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:54:26.777116 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:54:26.777132 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:54:26.777150 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:54:26.777170 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:54:26.777202 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:54:26.777225 kernel: audit: type=1403 audit(1739462064.669:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:54:26.777247 systemd[1]: Successfully loaded SELinux policy in 94.797ms.
Feb 13 15:54:26.777269 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.268ms.
Feb 13 15:54:26.777291 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:54:26.777311 systemd[1]: Detected virtualization google.
Feb 13 15:54:26.777331 systemd[1]: Detected architecture x86-64.
Feb 13 15:54:26.777356 systemd[1]: Detected first boot.
Feb 13 15:54:26.777378 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:54:26.777399 zram_generator::config[1022]: No configuration found.
Feb 13 15:54:26.777421 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:54:26.777441 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:54:26.777465 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:54:26.777487 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:54:26.777508 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:54:26.777531 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:54:26.777551 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:54:26.777573 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:54:26.777594 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:54:26.777619 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:54:26.777641 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:54:26.777661 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:54:26.777682 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:54:26.777703 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:54:26.777724 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:54:26.777745 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:54:26.777766 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:54:26.777791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:54:26.777811 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:54:26.777832 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:54:26.777853 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:54:26.777873 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:54:26.777895 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:54:26.777921 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:54:26.777943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:54:26.777965 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:54:26.777991 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:54:26.778012 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:54:26.778033 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:54:26.778062 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:54:26.778084 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:54:26.778104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:54:26.778126 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:54:26.778153 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:54:26.778175 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:54:26.778220 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:54:26.778243 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:54:26.778265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:54:26.778291 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:54:26.778313 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:54:26.778335 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:54:26.778358 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:54:26.778380 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:54:26.778401 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:54:26.778424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:54:26.778446 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:54:26.778472 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:54:26.778496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:54:26.778517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:54:26.778540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:54:26.778561 kernel: ACPI: bus type drm_connector registered
Feb 13 15:54:26.778582 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:54:26.778602 kernel: fuse: init (API version 7.39)
Feb 13 15:54:26.778623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:54:26.778644 kernel: loop: module loaded
Feb 13 15:54:26.778669 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:54:26.778691 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:54:26.778712 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:54:26.778734 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:54:26.778756 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:54:26.778778 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:54:26.778799 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:54:26.778850 systemd-journald[1109]: Collecting audit messages is disabled.
Feb 13 15:54:26.778900 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:54:26.778923 systemd-journald[1109]: Journal started
Feb 13 15:54:26.778968 systemd-journald[1109]: Runtime Journal (/run/log/journal/08793ec615e8486fa066041b8eb17046) is 8.0M, max 148.6M, 140.6M free.
Feb 13 15:54:25.525449 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:54:25.548457 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 15:54:25.549019 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:54:26.807220 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:54:26.819245 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:54:26.835220 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:54:26.835311 systemd[1]: Stopped verity-setup.service.
Feb 13 15:54:26.874211 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:54:26.883256 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:54:26.895839 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:54:26.905588 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:54:26.916588 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:54:26.926569 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:54:26.936542 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:54:26.947630 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:54:26.957746 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:54:26.969685 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:54:26.981640 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:54:26.981872 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:54:26.993655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:54:26.993883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:54:27.005643 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:54:27.005874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:54:27.015775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:54:27.016005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:54:27.027659 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:54:27.027881 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:54:27.037692 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:54:27.037927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:54:27.047693 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:54:27.057702 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:54:27.069692 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:54:27.081720 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:54:27.106267 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:54:27.128347 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:54:27.143333 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:54:27.153342 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:54:27.153570 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:54:27.164672 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:54:27.180444 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:54:27.198431 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:54:27.208498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:54:27.215735 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:54:27.232586 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:54:27.243383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:54:27.251578 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:54:27.264415 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:54:27.271699 systemd-journald[1109]: Time spent on flushing to /var/log/journal/08793ec615e8486fa066041b8eb17046 is 110.532ms for 915 entries.
Feb 13 15:54:27.271699 systemd-journald[1109]: System Journal (/var/log/journal/08793ec615e8486fa066041b8eb17046) is 8.0M, max 584.8M, 576.8M free.
Feb 13 15:54:27.431336 systemd-journald[1109]: Received client request to flush runtime journal.
Feb 13 15:54:27.431427 kernel: loop0: detected capacity change from 0 to 211296
Feb 13 15:54:27.282359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:54:27.303952 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:54:27.324677 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:54:27.348295 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:54:27.362357 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:54:27.379568 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:54:27.390832 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:54:27.402853 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:54:27.414826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:54:27.445102 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:54:27.446154 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:54:27.453809 systemd-tmpfiles[1141]: ACLs are not supported, ignoring.
Feb 13 15:54:27.453841 systemd-tmpfiles[1141]: ACLs are not supported, ignoring.
Feb 13 15:54:27.462009 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:54:27.486331 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:54:27.488247 kernel: loop1: detected capacity change from 0 to 52184
Feb 13 15:54:27.501842 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:54:27.516175 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:54:27.517847 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:54:27.544087 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:54:27.563520 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 15:54:27.561899 udevadm[1143]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:54:27.664196 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:54:27.679230 kernel: loop3: detected capacity change from 0 to 141000
Feb 13 15:54:27.689402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:54:27.772092 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Feb 13 15:54:27.772850 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Feb 13 15:54:27.788699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:54:27.796340 kernel: loop4: detected capacity change from 0 to 211296
Feb 13 15:54:27.840228 kernel: loop5: detected capacity change from 0 to 52184
Feb 13 15:54:27.876287 kernel: loop6: detected capacity change from 0 to 138184
Feb 13 15:54:27.926229 kernel: loop7: detected capacity change from 0 to 141000
Feb 13 15:54:27.980055 (sd-merge)[1167]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Feb 13 15:54:27.981548 (sd-merge)[1167]: Merged extensions into '/usr'.
Feb 13 15:54:27.992213 systemd[1]: Reloading requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:54:27.992233 systemd[1]: Reloading...
Feb 13 15:54:28.113263 zram_generator::config[1191]: No configuration found.
Feb 13 15:54:28.439004 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:54:28.480229 ldconfig[1135]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:54:28.552438 systemd[1]: Reloading finished in 559 ms.
Feb 13 15:54:28.582095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:54:28.593008 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:54:28.617500 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:54:28.631680 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:54:28.652415 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:54:28.652434 systemd[1]: Reloading...
Feb 13 15:54:28.712855 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:54:28.716000 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:54:28.722105 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:54:28.726401 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Feb 13 15:54:28.726523 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Feb 13 15:54:28.750159 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:54:28.753244 systemd-tmpfiles[1235]: Skipping /boot
Feb 13 15:54:28.771691 zram_generator::config[1258]: No configuration found.
Feb 13 15:54:28.812746 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:54:28.813437 systemd-tmpfiles[1235]: Skipping /boot
Feb 13 15:54:28.961500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:54:29.027650 systemd[1]: Reloading finished in 374 ms.
Feb 13 15:54:29.049945 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:54:29.067928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:54:29.091580 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:54:29.107945 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:54:29.125314 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:54:29.152600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:54:29.172552 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:54:29.189031 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:54:29.205987 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:54:29.206378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:54:29.216930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:54:29.233521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:54:29.249972 augenrules[1331]: No rules
Feb 13 15:54:29.253581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:54:29.263484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:54:29.264468 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
Feb 13 15:54:29.271932 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:54:29.281348 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:54:29.288214 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:54:29.288905 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:54:29.301292 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:54:29.313175 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:54:29.326110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:54:29.326362 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:54:29.337841 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:54:29.350015 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:54:29.361880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:54:29.363003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:54:29.378317 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:54:29.378683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:54:29.432505 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:54:29.479667 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:54:29.493879 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:54:29.504373 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:54:29.515130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:54:29.519180 systemd-resolved[1316]: Positive Trust Anchors:
Feb 13 15:54:29.522273 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:54:29.522341 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:54:29.522500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:54:29.540470 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:54:29.540970 systemd-resolved[1316]: Defaulting to hostname 'linux'.
Feb 13 15:54:29.557732 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:54:29.569637 augenrules[1372]: /sbin/augenrules: No change
Feb 13 15:54:29.577418 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:54:29.597465 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:54:29.606478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:54:29.619522 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:54:29.620227 augenrules[1396]: No rules
Feb 13 15:54:29.627376 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:54:29.646414 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:54:29.656353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:54:29.656618 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:54:29.658523 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:54:29.669004 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:54:29.674747 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:54:29.685355 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:54:29.686541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:54:29.697811 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:54:29.698266 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:54:29.708889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:54:29.709126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:54:29.741166 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Feb 13 15:54:29.812340 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1348)
Feb 13 15:54:29.812397 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 15:54:29.740808 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:54:29.741044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:54:29.766705 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:54:29.776837 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:54:29.810592 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:54:29.830569 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:54:29.830672 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 13 15:54:29.859132 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:54:29.860951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:54:29.872209 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 15:54:29.890210 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Feb 13 15:54:29.893433 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Feb 13 15:54:29.903355 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:54:29.903470 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:54:29.963805 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:54:29.998130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 15:54:30.001744 systemd-networkd[1398]: lo: Link UP
Feb 13 15:54:30.001757 systemd-networkd[1398]: lo: Gained carrier
Feb 13 15:54:30.004979 systemd-networkd[1398]: Enumeration completed
Feb 13 15:54:30.005823 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:54:30.006355 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:54:30.007335 systemd-networkd[1398]: eth0: Link UP
Feb 13 15:54:30.007461 systemd-networkd[1398]: eth0: Gained carrier
Feb 13 15:54:30.007492 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:54:30.009959 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:54:30.020112 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Feb 13 15:54:30.021290 systemd-networkd[1398]: eth0: DHCPv4 address 10.128.0.29/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 15:54:30.033552 systemd[1]: Reached target network.target - Network.
Feb 13 15:54:30.047605 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:54:30.070507 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:54:30.089654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:54:30.110611 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:54:30.134429 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:54:30.141318 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:54:30.163902 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:54:30.199578 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:54:30.200289 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:54:30.207512 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:54:30.221989 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:54:30.235567 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:54:30.246618 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:54:30.256545 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:54:30.267435 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:54:30.278584 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:54:30.288547 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:54:30.299350 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:54:30.310366 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:54:30.310452 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:54:30.319357 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:54:30.329869 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:54:30.342090 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:54:30.359089 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:54:30.370437 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:54:30.381598 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:54:30.392253 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:54:30.402364 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:54:30.410492 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:54:30.410571 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:54:30.415355 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:54:30.438897 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:54:30.456527 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:54:30.469214 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:54:30.497274 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:54:30.506044 jq[1453]: false
Feb 13 15:54:30.507354 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:54:30.518560 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:54:30.538421 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:54:30.554910 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:54:30.562515 extend-filesystems[1454]: Found loop4
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found loop5
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found loop6
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found loop7
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found sda
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found sda1
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found sda2
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found sda3
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found usr
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found sda4
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found sda6
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found sda7
Feb 13 15:54:30.573430 extend-filesystems[1454]: Found sda9
Feb 13 15:54:30.573430 extend-filesystems[1454]: Checking size of /dev/sda9
Feb 13 15:54:30.747390 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Feb 13 15:54:30.747601 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:33:53 UTC 2025 (1): Starting
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: ----------------------------------------------------
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: corporation. Support and training for ntp-4 are
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: available at https://www.nwtime.org/support
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: ----------------------------------------------------
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: proto: precision = 0.106 usec (-23)
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: basedate set to 2025-02-01
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: Listen normally on 3 eth0 10.128.0.29:123
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: Listen normally on 4 lo [::1]:123
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: bind(21) AF_INET6 fe80::4001:aff:fe80:1d%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1d%2#123
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: failed to init interface for address fe80::4001:aff:fe80:1d%2
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:54:30.747673 ntpd[1458]: 13 Feb 15:54:30 ntpd[1458]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:54:30.749081 coreos-metadata[1451]: Feb 13 15:54:30.591 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Feb 13 15:54:30.749081 coreos-metadata[1451]: Feb 13 15:54:30.595 INFO Fetch successful
Feb 13 15:54:30.749081 coreos-metadata[1451]: Feb 13 15:54:30.595 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Feb 13 15:54:30.749081 coreos-metadata[1451]: Feb 13 15:54:30.600 INFO Fetch successful
Feb 13 15:54:30.749081 coreos-metadata[1451]: Feb 13 15:54:30.600 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Feb 13 15:54:30.749081 coreos-metadata[1451]: Feb 13 15:54:30.601 INFO Fetch successful
Feb 13 15:54:30.749081 coreos-metadata[1451]: Feb 13 15:54:30.601 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Feb 13 15:54:30.749081 coreos-metadata[1451]: Feb 13 15:54:30.604 INFO Fetch successful
Feb 13 15:54:30.648947 dbus-daemon[1452]: [system] SELinux support is enabled
Feb 13 15:54:30.573430 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:54:30.750013 extend-filesystems[1454]: Resized partition /dev/sda9
Feb 13 15:54:30.777549 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1361)
Feb 13 15:54:30.656074 dbus-daemon[1452]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1398 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:54:30.595420 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:54:30.778003 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:54:30.778003 extend-filesystems[1474]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Feb 13 15:54:30.778003 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 2
Feb 13 15:54:30.778003 extend-filesystems[1474]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Feb 13 15:54:30.665965 ntpd[1458]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:33:53 UTC 2025 (1): Starting
Feb 13 15:54:30.623942 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Feb 13 15:54:30.841227 extend-filesystems[1454]: Resized filesystem in /dev/sda9
Feb 13 15:54:30.665996 ntpd[1458]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:54:30.624807 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:54:30.666012 ntpd[1458]: ----------------------------------------------------
Feb 13 15:54:30.634876 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:54:30.860581 update_engine[1475]: I20250213 15:54:30.781584 1475 main.cc:92] Flatcar Update Engine starting
Feb 13 15:54:30.860581 update_engine[1475]: I20250213 15:54:30.787775 1475 update_check_scheduler.cc:74] Next update check in 10m56s
Feb 13 15:54:30.666027 ntpd[1458]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:54:30.678090 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:54:30.861168 jq[1480]: true
Feb 13 15:54:30.666042 ntpd[1458]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:54:30.695361 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:54:30.666057 ntpd[1458]: corporation. Support and training for ntp-4 are
Feb 13 15:54:30.716807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:54:30.666070 ntpd[1458]: available at https://www.nwtime.org/support
Feb 13 15:54:30.717090 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:54:30.862114 jq[1487]: true
Feb 13 15:54:30.666085 ntpd[1458]: ----------------------------------------------------
Feb 13 15:54:30.717634 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:54:30.668787 ntpd[1458]: proto: precision = 0.106 usec (-23)
Feb 13 15:54:30.718435 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:54:30.671066 ntpd[1458]: basedate set to 2025-02-01
Feb 13 15:54:30.734664 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:54:30.671094 ntpd[1458]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:54:30.735340 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:54:30.686830 ntpd[1458]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:54:30.775910 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:54:30.686898 ntpd[1458]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:54:30.777288 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:54:30.687142 ntpd[1458]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:54:30.856573 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:54:30.687220 ntpd[1458]: Listen normally on 3 eth0 10.128.0.29:123
Feb 13 15:54:30.895348 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:54:30.687282 ntpd[1458]: Listen normally on 4 lo [::1]:123
Feb 13 15:54:30.687350 ntpd[1458]: bind(21) AF_INET6 fe80::4001:aff:fe80:1d%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:54:30.687379 ntpd[1458]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1d%2#123
Feb 13 15:54:30.687400 ntpd[1458]: failed to init interface for address fe80::4001:aff:fe80:1d%2
Feb 13 15:54:30.687440 ntpd[1458]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:54:30.691844 ntpd[1458]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:54:30.691886 ntpd[1458]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:54:30.857917 dbus-daemon[1452]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:54:30.914959 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:54:30.949647 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:54:30.952538 systemd-logind[1470]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:54:30.952711 systemd-logind[1470]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 13 15:54:30.952752 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:54:30.954244 systemd-logind[1470]: New seat seat0.
Feb 13 15:54:30.962119 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:54:30.962290 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:54:30.962332 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:54:30.983486 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:54:30.993355 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:54:30.993404 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:54:31.013559 bash[1517]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:54:31.013453 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:54:31.023683 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:54:31.034047 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:54:31.059593 systemd[1]: Starting sshkeys.service...
Feb 13 15:54:31.124212 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:54:31.143753 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:54:31.276799 dbus-daemon[1452]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:54:31.277035 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:54:31.277789 coreos-metadata[1521]: Feb 13 15:54:31.277 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Feb 13 15:54:31.278830 dbus-daemon[1452]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1516 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:54:31.280361 coreos-metadata[1521]: Feb 13 15:54:31.280 INFO Fetch failed with 404: resource not found
Feb 13 15:54:31.280361 coreos-metadata[1521]: Feb 13 15:54:31.280 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Feb 13 15:54:31.281247 coreos-metadata[1521]: Feb 13 15:54:31.281 INFO Fetch successful
Feb 13 15:54:31.281247 coreos-metadata[1521]: Feb 13 15:54:31.281 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Feb 13 15:54:31.284324 coreos-metadata[1521]: Feb 13 15:54:31.284 INFO Fetch failed with 404: resource not found
Feb 13 15:54:31.284324 coreos-metadata[1521]: Feb 13 15:54:31.284 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Feb 13 15:54:31.284596 coreos-metadata[1521]: Feb 13 15:54:31.284 INFO Fetch failed with 404: resource not found
Feb 13 15:54:31.284596 coreos-metadata[1521]: Feb 13 15:54:31.284 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Feb 13 15:54:31.285298 coreos-metadata[1521]: Feb 13 15:54:31.285 INFO Fetch successful
Feb 13 15:54:31.293363 unknown[1521]: wrote ssh authorized keys file for user: core
Feb 13 15:54:31.295809 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:54:31.296382 systemd-networkd[1398]: eth0: Gained IPv6LL
Feb 13 15:54:31.299658 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:54:31.309103 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:54:31.330909 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:54:31.354069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:54:31.367224 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:54:31.375737 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:54:31.390313 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Feb 13 15:54:31.401313 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:54:31.411032 systemd[1]: Finished sshkeys.service.
Feb 13 15:54:31.443236 init.sh[1539]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Feb 13 15:54:31.443236 init.sh[1539]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Feb 13 15:54:31.443236 init.sh[1539]: + /usr/bin/google_instance_setup
Feb 13 15:54:31.495587 polkitd[1533]: Started polkitd version 121
Feb 13 15:54:31.528795 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:54:31.529347 polkitd[1533]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:54:31.529447 polkitd[1533]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:54:31.533366 polkitd[1533]: Finished loading, compiling and executing 2 rules
Feb 13 15:54:31.534881 dbus-daemon[1452]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:54:31.535492 polkitd[1533]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:54:31.538852 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:54:31.590510 systemd-hostnamed[1516]: Hostname set to (transient)
Feb 13 15:54:31.593299 systemd-resolved[1316]: System hostname changed to 'ci-4186-1-1-9a06a13f76427d66b44a.c.flatcar-212911.internal'.
Feb 13 15:54:31.654740 containerd[1488]: time="2025-02-13T15:54:31.653666451Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:54:31.775256 containerd[1488]: time="2025-02-13T15:54:31.773590701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782280 containerd[1488]: time="2025-02-13T15:54:31.780603543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782280 containerd[1488]: time="2025-02-13T15:54:31.780659434Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:54:31.782280 containerd[1488]: time="2025-02-13T15:54:31.780688885Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:54:31.782280 containerd[1488]: time="2025-02-13T15:54:31.780895047Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:54:31.782280 containerd[1488]: time="2025-02-13T15:54:31.780921723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782280 containerd[1488]: time="2025-02-13T15:54:31.781011163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782280 containerd[1488]: time="2025-02-13T15:54:31.781032115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782650 containerd[1488]: time="2025-02-13T15:54:31.782290885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782650 containerd[1488]: time="2025-02-13T15:54:31.782322349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782650 containerd[1488]: time="2025-02-13T15:54:31.782347248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782650 containerd[1488]: time="2025-02-13T15:54:31.782362702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782650 containerd[1488]: time="2025-02-13T15:54:31.782490187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:54:31.782867 containerd[1488]: time="2025-02-13T15:54:31.782810969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:54:31.783234 containerd[1488]: time="2025-02-13T15:54:31.783012156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:54:31.783234 containerd[1488]: time="2025-02-13T15:54:31.783044949Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:54:31.783921 containerd[1488]: time="2025-02-13T15:54:31.783178564Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:54:31.788585 containerd[1488]: time="2025-02-13T15:54:31.788322624Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:54:31.795683 containerd[1488]: time="2025-02-13T15:54:31.795641290Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:54:31.795959 containerd[1488]: time="2025-02-13T15:54:31.795856204Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:54:31.796059 containerd[1488]: time="2025-02-13T15:54:31.795940380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.796143060Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.796172849Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.796389161Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.796741349Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.796906459Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.796934061Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.796961338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.796985112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.797007752Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.797028736Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.797051375Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.797075279Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.797098013Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:54:31.799017 containerd[1488]: time="2025-02-13T15:54:31.797119587Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797140515Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797199703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797225637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797245912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797267997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797289260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797311714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797330998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797357054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797378400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797401683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797422289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797442029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797463581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.799629 containerd[1488]: time="2025-02-13T15:54:31.797487079Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797520508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797542567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797561083Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797628461Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797657866Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797676401Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797698788Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797717799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797746981Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797765297Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:54:31.800226 containerd[1488]: time="2025-02-13T15:54:31.797784302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:54:31.801537 containerd[1488]: time="2025-02-13T15:54:31.801425769Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:54:31.801853 containerd[1488]: time="2025-02-13T15:54:31.801820807Z" level=info msg="Connect containerd service"
Feb 13 15:54:31.802025 containerd[1488]: time="2025-02-13T15:54:31.801988176Z" level=info msg="using legacy CRI server"
Feb 13 15:54:31.802135 containerd[1488]: time="2025-02-13T15:54:31.802118205Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:54:31.802455 containerd[1488]: time="2025-02-13T15:54:31.802432559Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:54:31.805534 containerd[1488]: time="2025-02-13T15:54:31.805493143Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:54:31.806741 containerd[1488]: time="2025-02-13T15:54:31.805891425Z" level=info msg="Start subscribing containerd event"
Feb 13 15:54:31.806741 containerd[1488]: time="2025-02-13T15:54:31.805964888Z" level=info msg="Start recovering state"
Feb 13 15:54:31.806741 containerd[1488]: time="2025-02-13T15:54:31.806088455Z" level=info msg="Start event monitor"
Feb 13 15:54:31.806741 containerd[1488]: time="2025-02-13T15:54:31.806109978Z" level=info msg="Start snapshots syncer"
Feb 13 15:54:31.806741 containerd[1488]: time="2025-02-13T15:54:31.806130184Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:54:31.806741 containerd[1488]: time="2025-02-13T15:54:31.806143403Z" level=info msg="Start streaming server"
Feb 13 15:54:31.806741 containerd[1488]: time="2025-02-13T15:54:31.806303261Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:54:31.806741 containerd[1488]: time="2025-02-13T15:54:31.806381464Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:54:31.806593 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:54:31.807716 containerd[1488]: time="2025-02-13T15:54:31.807590145Z" level=info msg="containerd successfully booted in 0.156107s"
Feb 13 15:54:32.111519 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:54:32.175152 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:54:32.194635 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:54:32.213599 systemd[1]: Started sshd@0-10.128.0.29:22-139.178.89.65:60192.service - OpenSSH per-connection server daemon (139.178.89.65:60192).
Feb 13 15:54:32.228077 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:54:32.228402 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:54:32.250700 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:54:32.282349 instance-setup[1546]: INFO Running google_set_multiqueue.
Feb 13 15:54:32.298590 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:54:32.319724 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:54:32.327638 instance-setup[1546]: INFO Set channels for eth0 to 2.
Feb 13 15:54:32.332575 instance-setup[1546]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1.
Feb 13 15:54:32.335178 instance-setup[1546]: INFO /proc/irq/27/smp_affinity_list: real affinity 0
Feb 13 15:54:32.335280 instance-setup[1546]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1.
Feb 13 15:54:32.337423 instance-setup[1546]: INFO /proc/irq/28/smp_affinity_list: real affinity 0
Feb 13 15:54:32.338430 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:54:32.337493 instance-setup[1546]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1.
Feb 13 15:54:32.341671 instance-setup[1546]: INFO /proc/irq/29/smp_affinity_list: real affinity 1
Feb 13 15:54:32.342506 instance-setup[1546]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1.
Feb 13 15:54:32.345325 instance-setup[1546]: INFO /proc/irq/30/smp_affinity_list: real affinity 1
Feb 13 15:54:32.349686 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:54:32.357682 instance-setup[1546]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Feb 13 15:54:32.365645 instance-setup[1546]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Feb 13 15:54:32.368548 instance-setup[1546]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Feb 13 15:54:32.368788 instance-setup[1546]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Feb 13 15:54:32.390140 init.sh[1539]: + /usr/bin/google_metadata_script_runner --script-type startup
Feb 13 15:54:32.561797 startup-script[1610]: INFO Starting startup scripts.
Feb 13 15:54:32.571455 startup-script[1610]: INFO No startup scripts found in metadata.
Feb 13 15:54:32.571548 startup-script[1610]: INFO Finished running startup scripts.
Feb 13 15:54:32.594766 init.sh[1539]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Feb 13 15:54:32.595720 init.sh[1539]: + daemon_pids=()
Feb 13 15:54:32.595720 init.sh[1539]: + for d in accounts clock_skew network
Feb 13 15:54:32.595720 init.sh[1539]: + daemon_pids+=($!)
Feb 13 15:54:32.595720 init.sh[1539]: + for d in accounts clock_skew network
Feb 13 15:54:32.597103 init.sh[1539]: + daemon_pids+=($!)
Feb 13 15:54:32.597103 init.sh[1539]: + for d in accounts clock_skew network
Feb 13 15:54:32.597103 init.sh[1539]: + daemon_pids+=($!)
Feb 13 15:54:32.597103 init.sh[1539]: + NOTIFY_SOCKET=/run/systemd/notify
Feb 13 15:54:32.597103 init.sh[1539]: + /usr/bin/systemd-notify --ready
Feb 13 15:54:32.597445 init.sh[1613]: + /usr/bin/google_accounts_daemon
Feb 13 15:54:32.598024 init.sh[1614]: + /usr/bin/google_clock_skew_daemon
Feb 13 15:54:32.600532 init.sh[1615]: + /usr/bin/google_network_daemon
Feb 13 15:54:32.630732 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Feb 13 15:54:32.633866 sshd[1573]: Accepted publickey for core from 139.178.89.65 port 60192 ssh2: RSA SHA256:oGy5sLWL11heRvQ79Ti3NN8675bvXiOgHXLBDCC6VGw
Feb 13 15:54:32.637960 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:32.659385 init.sh[1539]: + wait -n 1613 1614 1615
Feb 13 15:54:32.667242 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:54:32.687731 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:54:32.716587 systemd-logind[1470]: New session 1 of user core.
Feb 13 15:54:32.736328 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:54:32.758714 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:54:32.805901 (systemd)[1619]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:54:33.107347 systemd[1619]: Queued start job for default target default.target.
Feb 13 15:54:33.113827 systemd[1619]: Created slice app.slice - User Application Slice.
Feb 13 15:54:33.114357 systemd[1619]: Reached target paths.target - Paths.
Feb 13 15:54:33.114412 systemd[1619]: Reached target timers.target - Timers.
Feb 13 15:54:33.120804 systemd[1619]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:54:33.155728 systemd[1619]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:54:33.159878 systemd[1619]: Reached target sockets.target - Sockets.
Feb 13 15:54:33.159922 systemd[1619]: Reached target basic.target - Basic System.
Feb 13 15:54:33.159999 systemd[1619]: Reached target default.target - Main User Target.
Feb 13 15:54:33.160056 systemd[1619]: Startup finished in 333ms.
Feb 13 15:54:33.160276 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:54:33.179444 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:54:33.213561 groupadd[1632]: group added to /etc/group: name=google-sudoers, GID=1000
Feb 13 15:54:33.220701 groupadd[1632]: group added to /etc/gshadow: name=google-sudoers
Feb 13 15:54:33.229796 google-clock-skew[1614]: INFO Starting Google Clock Skew daemon.
Feb 13 15:54:33.243824 google-clock-skew[1614]: INFO Clock drift token has changed: 0.
Feb 13 15:54:33.263678 google-networking[1615]: INFO Starting Google Networking daemon.
Feb 13 15:54:33.297078 groupadd[1632]: new group: name=google-sudoers, GID=1000
Feb 13 15:54:33.332551 google-accounts[1613]: INFO Starting Google Accounts daemon.
Feb 13 15:54:33.359593 google-accounts[1613]: WARNING OS Login not installed.
Feb 13 15:54:33.362051 google-accounts[1613]: INFO Creating a new user account for 0.
Feb 13 15:54:33.368858 init.sh[1645]: useradd: invalid user name '0': use --badname to ignore
Feb 13 15:54:33.369653 google-accounts[1613]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Feb 13 15:54:33.444676 systemd[1]: Started sshd@1-10.128.0.29:22-139.178.89.65:60196.service - OpenSSH per-connection server daemon (139.178.89.65:60196).
Feb 13 15:54:33.461812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:54:33.477115 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:54:33.489002 systemd[1]: Startup finished in 1.019s (kernel) + 15.890s (initrd) + 8.903s (userspace) = 25.813s.
Feb 13 15:54:33.491122 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:54:33.508568 agetty[1596]: failed to open credentials directory
Feb 13 15:54:33.511722 agetty[1593]: failed to open credentials directory
Feb 13 15:54:33.666555 ntpd[1458]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1d%2]:123
Feb 13 15:54:33.667097 ntpd[1458]: 13 Feb 15:54:33 ntpd[1458]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1d%2]:123
Feb 13 15:54:33.754459 sshd[1652]: Accepted publickey for core from 139.178.89.65 port 60196 ssh2: RSA SHA256:oGy5sLWL11heRvQ79Ti3NN8675bvXiOgHXLBDCC6VGw
Feb 13 15:54:33.756859 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:33.764070 systemd-logind[1470]: New session 2 of user core.
Feb 13 15:54:33.770408 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:54:34.000153 systemd-resolved[1316]: Clock change detected. Flushing caches.
Feb 13 15:54:34.001118 google-clock-skew[1614]: INFO Synced system time with hardware clock.
Feb 13 15:54:34.135683 sshd[1665]: Connection closed by 139.178.89.65 port 60196
Feb 13 15:54:34.136446 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:34.141902 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:54:34.142574 systemd[1]: sshd@1-10.128.0.29:22-139.178.89.65:60196.service: Deactivated successfully.
Feb 13 15:54:34.145609 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:54:34.148131 systemd-logind[1470]: Removed session 2.
Feb 13 15:54:34.197970 systemd[1]: Started sshd@2-10.128.0.29:22-139.178.89.65:60198.service - OpenSSH per-connection server daemon (139.178.89.65:60198).
Feb 13 15:54:34.492806 sshd[1670]: Accepted publickey for core from 139.178.89.65 port 60198 ssh2: RSA SHA256:oGy5sLWL11heRvQ79Ti3NN8675bvXiOgHXLBDCC6VGw
Feb 13 15:54:34.494884 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:34.501871 systemd-logind[1470]: New session 3 of user core.
Feb 13 15:54:34.509813 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:54:34.700189 sshd[1672]: Connection closed by 139.178.89.65 port 60198
Feb 13 15:54:34.701054 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:34.707607 systemd[1]: sshd@2-10.128.0.29:22-139.178.89.65:60198.service: Deactivated successfully.
Feb 13 15:54:34.710543 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:54:34.711985 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:54:34.713471 systemd-logind[1470]: Removed session 3.
Feb 13 15:54:34.729438 kubelet[1653]: E0213 15:54:34.729329 1653 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:54:34.731710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:54:34.731938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:54:34.732322 systemd[1]: kubelet.service: Consumed 1.265s CPU time.
Feb 13 15:54:34.755483 systemd[1]: Started sshd@3-10.128.0.29:22-139.178.89.65:58752.service - OpenSSH per-connection server daemon (139.178.89.65:58752).
Feb 13 15:54:35.047742 sshd[1679]: Accepted publickey for core from 139.178.89.65 port 58752 ssh2: RSA SHA256:oGy5sLWL11heRvQ79Ti3NN8675bvXiOgHXLBDCC6VGw
Feb 13 15:54:35.049438 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:35.054953 systemd-logind[1470]: New session 4 of user core.
Feb 13 15:54:35.065803 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:54:35.261579 sshd[1681]: Connection closed by 139.178.89.65 port 58752
Feb 13 15:54:35.262425 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:35.266651 systemd[1]: sshd@3-10.128.0.29:22-139.178.89.65:58752.service: Deactivated successfully.
Feb 13 15:54:35.268976 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:54:35.270646 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:54:35.272293 systemd-logind[1470]: Removed session 4.
Feb 13 15:54:35.314552 systemd[1]: Started sshd@4-10.128.0.29:22-139.178.89.65:58760.service - OpenSSH per-connection server daemon (139.178.89.65:58760).
Feb 13 15:54:35.615114 sshd[1686]: Accepted publickey for core from 139.178.89.65 port 58760 ssh2: RSA SHA256:oGy5sLWL11heRvQ79Ti3NN8675bvXiOgHXLBDCC6VGw
Feb 13 15:54:35.616921 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:35.623471 systemd-logind[1470]: New session 5 of user core.
Feb 13 15:54:35.634883 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:54:35.807026 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:54:35.807521 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:54:35.823418 sudo[1689]: pam_unix(sudo:session): session closed for user root
Feb 13 15:54:35.865942 sshd[1688]: Connection closed by 139.178.89.65 port 58760
Feb 13 15:54:35.867655 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:35.873079 systemd[1]: sshd@4-10.128.0.29:22-139.178.89.65:58760.service: Deactivated successfully.
Feb 13 15:54:35.875261 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:54:35.876288 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:54:35.878137 systemd-logind[1470]: Removed session 5.
Feb 13 15:54:35.925321 systemd[1]: Started sshd@5-10.128.0.29:22-139.178.89.65:58770.service - OpenSSH per-connection server daemon (139.178.89.65:58770).
Feb 13 15:54:36.210770 sshd[1694]: Accepted publickey for core from 139.178.89.65 port 58770 ssh2: RSA SHA256:oGy5sLWL11heRvQ79Ti3NN8675bvXiOgHXLBDCC6VGw
Feb 13 15:54:36.212247 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:36.217964 systemd-logind[1470]: New session 6 of user core.
Feb 13 15:54:36.234815 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:54:36.389266 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:54:36.389796 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:54:36.394990 sudo[1698]: pam_unix(sudo:session): session closed for user root
Feb 13 15:54:36.408430 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:54:36.408944 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:54:36.429103 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:54:36.466078 augenrules[1720]: No rules
Feb 13 15:54:36.467293 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:54:36.467613 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:54:36.469737 sudo[1697]: pam_unix(sudo:session): session closed for user root
Feb 13 15:54:36.512229 sshd[1696]: Connection closed by 139.178.89.65 port 58770
Feb 13 15:54:36.513074 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:36.517601 systemd[1]: sshd@5-10.128.0.29:22-139.178.89.65:58770.service: Deactivated successfully.
Feb 13 15:54:36.519855 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:54:36.521915 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:54:36.523295 systemd-logind[1470]: Removed session 6.
Feb 13 15:54:36.576987 systemd[1]: Started sshd@6-10.128.0.29:22-139.178.89.65:58782.service - OpenSSH per-connection server daemon (139.178.89.65:58782).
Feb 13 15:54:36.867727 sshd[1728]: Accepted publickey for core from 139.178.89.65 port 58782 ssh2: RSA SHA256:oGy5sLWL11heRvQ79Ti3NN8675bvXiOgHXLBDCC6VGw
Feb 13 15:54:36.869525 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:36.875637 systemd-logind[1470]: New session 7 of user core.
Feb 13 15:54:36.882863 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:54:37.046536 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:54:37.047055 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:54:38.005394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:54:38.006046 systemd[1]: kubelet.service: Consumed 1.265s CPU time.
Feb 13 15:54:38.019251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:54:38.063478 systemd[1]: Reloading requested from client PID 1769 ('systemctl') (unit session-7.scope)...
Feb 13 15:54:38.063503 systemd[1]: Reloading...
Feb 13 15:54:38.233624 zram_generator::config[1809]: No configuration found.
Feb 13 15:54:38.372902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:54:38.476273 systemd[1]: Reloading finished in 411 ms.
Feb 13 15:54:38.533330 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:54:38.533464 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:54:38.533839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:54:38.540250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:54:38.810526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:54:38.822152 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:54:38.877987 kubelet[1860]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:54:38.877987 kubelet[1860]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:54:38.878532 kubelet[1860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:54:38.878532 kubelet[1860]: I0213 15:54:38.878098 1860 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:54:39.484149 kubelet[1860]: I0213 15:54:39.484090 1860 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:54:39.484149 kubelet[1860]: I0213 15:54:39.484129 1860 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:54:39.484516 kubelet[1860]: I0213 15:54:39.484465 1860 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:54:39.522014 kubelet[1860]: I0213 15:54:39.521274 1860 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:54:39.540023 kubelet[1860]: I0213 15:54:39.539962 1860 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:54:39.544079 kubelet[1860]: I0213 15:54:39.544027 1860 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:54:39.544339 kubelet[1860]: I0213 15:54:39.544295 1860 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:54:39.544339 kubelet[1860]: I0213 15:54:39.544331 1860 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:54:39.544634 kubelet[1860]: I0213 15:54:39.544350 1860 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:54:39.544634 kubelet[1860]: I0213 15:54:39.544509 1860 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:54:39.544745 kubelet[1860]: I0213 15:54:39.544692 1860 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:54:39.544745 kubelet[1860]: I0213 15:54:39.544716 1860 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:54:39.544831 kubelet[1860]: I0213 15:54:39.544754 1860 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:54:39.544831 kubelet[1860]: I0213 15:54:39.544772 1860 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:54:39.545396 kubelet[1860]: E0213 15:54:39.545320 1860 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:39.545396 kubelet[1860]: E0213 15:54:39.545391 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:39.546894 kubelet[1860]: I0213 15:54:39.546843 1860 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:54:39.550987 kubelet[1860]: I0213 15:54:39.550953 1860 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:54:39.554242 kubelet[1860]: W0213 15:54:39.554204 1860 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:54:39.555110 kubelet[1860]: I0213 15:54:39.555006 1860 server.go:1256] "Started kubelet"
Feb 13 15:54:39.556974 kubelet[1860]: I0213 15:54:39.556776 1860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:54:39.562971 kubelet[1860]: W0213 15:54:39.562935 1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:54:39.563081 kubelet[1860]: E0213 15:54:39.562983 1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:54:39.563164 kubelet[1860]: W0213 15:54:39.563102 1860 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.128.0.29" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:54:39.563164 kubelet[1860]: E0213 15:54:39.563123 1860 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.29" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:54:39.568388 kubelet[1860]: I0213 15:54:39.567981 1860 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:54:39.569222 kubelet[1860]: I0213 15:54:39.569143 1860 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:54:39.570961 kubelet[1860]: I0213 15:54:39.570902 1860 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:54:39.571185 kubelet[1860]: I0213 15:54:39.571162 1860 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:54:39.572396 kubelet[1860]: I0213 15:54:39.572348 1860 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:54:39.572880 kubelet[1860]: I0213 15:54:39.572855 1860 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:54:39.573032 kubelet[1860]: I0213 15:54:39.572963 1860 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:54:39.576901 kubelet[1860]: I0213 15:54:39.575621 1860 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:54:39.576901 kubelet[1860]: I0213 15:54:39.575735 1860 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:54:39.582484 kubelet[1860]: E0213 15:54:39.582462 1860 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.128.0.29.1823cf8b9822e107 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.128.0.29,UID:10.128.0.29,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.128.0.29,},FirstTimestamp:2025-02-13 15:54:39.554969863 +0000 UTC m=+0.727762134,LastTimestamp:2025-02-13 15:54:39.554969863 +0000 UTC m=+0.727762134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.128.0.29,}"
Feb 13 15:54:39.584542 kubelet[1860]: E0213 15:54:39.584521 1860 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:54:39.586152 kubelet[1860]: I0213 15:54:39.586134 1860 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:54:39.593482 kubelet[1860]: E0213 15:54:39.593401 1860 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.29\" not found" node="10.128.0.29"
Feb 13 15:54:39.623103 kubelet[1860]: I0213 15:54:39.622729 1860 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:54:39.623103 kubelet[1860]: I0213 15:54:39.622760 1860 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:54:39.623103 kubelet[1860]: I0213 15:54:39.622784 1860 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:54:39.625593 kubelet[1860]: I0213 15:54:39.625349 1860 policy_none.go:49] "None policy: Start"
Feb 13 15:54:39.627424 kubelet[1860]: I0213 15:54:39.626975 1860 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:54:39.627424 kubelet[1860]: I0213 15:54:39.627009 1860 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:54:39.637030 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:54:39.651290 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:54:39.658033 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:54:39.673262 kubelet[1860]: I0213 15:54:39.670089 1860 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:54:39.673262 kubelet[1860]: I0213 15:54:39.670461 1860 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:54:39.673506 kubelet[1860]: E0213 15:54:39.673427 1860 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.29\" not found"
Feb 13 15:54:39.674393 kubelet[1860]: I0213 15:54:39.673841 1860 kubelet_node_status.go:73] "Attempting to register node" node="10.128.0.29"
Feb 13 15:54:39.684253 kubelet[1860]: I0213 15:54:39.684081 1860 kubelet_node_status.go:76] "Successfully registered node" node="10.128.0.29"
Feb 13 15:54:39.685661 kubelet[1860]: I0213 15:54:39.685584 1860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:54:39.688074 kubelet[1860]: I0213 15:54:39.688036 1860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:54:39.688074 kubelet[1860]: I0213 15:54:39.688075 1860 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:54:39.688293 kubelet[1860]: I0213 15:54:39.688100 1860 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:54:39.688293 kubelet[1860]: E0213 15:54:39.688180 1860 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 15:54:39.732138 kubelet[1860]: E0213 15:54:39.732082 1860 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.29\" not found"
Feb 13 15:54:39.832631 kubelet[1860]: E0213 15:54:39.832442 1860 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.29\" not found"
Feb 13 15:54:39.933128 kubelet[1860]: E0213 15:54:39.933064 1860 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.29\" not found"
Feb 13 15:54:40.034160 kubelet[1860]: E0213 15:54:40.034073 1860 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.29\" not found"
Feb 13 15:54:40.135145 kubelet[1860]: E0213 15:54:40.134980 1860 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.29\" not found"
Feb 13 15:54:40.235803 kubelet[1860]: E0213 15:54:40.235734 1860 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.29\" not found"
Feb 13 15:54:40.336535 kubelet[1860]: E0213 15:54:40.336469 1860 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.29\" not found"
Feb 13 15:54:40.353162 sudo[1731]: pam_unix(sudo:session): session closed for user root
Feb 13 15:54:40.396021 sshd[1730]: Connection closed by 139.178.89.65 port 58782
Feb 13 15:54:40.396891 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:40.401864 systemd[1]: sshd@6-10.128.0.29:22-139.178.89.65:58782.service: Deactivated successfully.
Feb 13 15:54:40.404347 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:54:40.406494 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:54:40.408009 systemd-logind[1470]: Removed session 7.
Feb 13 15:54:40.437182 kubelet[1860]: E0213 15:54:40.437130 1860 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.29\" not found"
Feb 13 15:54:40.493032 kubelet[1860]: I0213 15:54:40.492956 1860 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 15:54:40.493227 kubelet[1860]: W0213 15:54:40.493208 1860 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:54:40.493294 kubelet[1860]: W0213 15:54:40.493257 1860 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:54:40.493359 kubelet[1860]: W0213 15:54:40.493292 1860 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:54:40.538426 kubelet[1860]: I0213 15:54:40.538379 1860 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 15:54:40.539146 containerd[1488]: time="2025-02-13T15:54:40.539083597Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:54:40.539749 kubelet[1860]: I0213 15:54:40.539370 1860 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 15:54:40.546620 kubelet[1860]: I0213 15:54:40.546357 1860 apiserver.go:52] "Watching apiserver"
Feb 13 15:54:40.546620 kubelet[1860]: E0213 15:54:40.546395 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:54:40.562954 kubelet[1860]: I0213 15:54:40.562910 1860 topology_manager.go:215] "Topology Admit Handler" podUID="26a28cea-7c71-4536-9836-b29824c3f605" podNamespace="calico-system" podName="calico-node-4l9qs"
Feb 13 15:54:40.563130 kubelet[1860]: I0213 15:54:40.563073 1860 topology_manager.go:215] "Topology Admit Handler" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" podNamespace="calico-system" podName="csi-node-driver-hkphc"
Feb 13 15:54:40.563205 kubelet[1860]: I0213 15:54:40.563151 1860 topology_manager.go:215] "Topology Admit Handler" podUID="c648307b-a0d9-4267-b886-21149766168f" podNamespace="kube-system" podName="kube-proxy-v7gff"
Feb 13 15:54:40.564186 kubelet[1860]: E0213 15:54:40.563422 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f"
Feb 13 15:54:40.573610 kubelet[1860]: I0213 15:54:40.573579 1860 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:54:40.573840 systemd[1]: Created slice kubepods-besteffort-podc648307b_a0d9_4267_b886_21149766168f.slice - libcontainer container kubepods-besteffort-podc648307b_a0d9_4267_b886_21149766168f.slice.
Feb 13 15:54:40.581589 kubelet[1860]: I0213 15:54:40.580655 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c648307b-a0d9-4267-b886-21149766168f-lib-modules\") pod \"kube-proxy-v7gff\" (UID: \"c648307b-a0d9-4267-b886-21149766168f\") " pod="kube-system/kube-proxy-v7gff"
Feb 13 15:54:40.581589 kubelet[1860]: I0213 15:54:40.580727 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-policysync\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.581589 kubelet[1860]: I0213 15:54:40.580766 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/26a28cea-7c71-4536-9836-b29824c3f605-node-certs\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.581589 kubelet[1860]: I0213 15:54:40.580804 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-var-lib-calico\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.581589 kubelet[1860]: I0213 15:54:40.580835 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-log-dir\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.581943 kubelet[1860]: I0213 15:54:40.580872 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-flexvol-driver-host\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.581943 kubelet[1860]: I0213 15:54:40.580928 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fbf57cb1-3114-4738-90d1-01b2062bf75f-kubelet-dir\") pod \"csi-node-driver-hkphc\" (UID: \"fbf57cb1-3114-4738-90d1-01b2062bf75f\") " pod="calico-system/csi-node-driver-hkphc"
Feb 13 15:54:40.581943 kubelet[1860]: I0213 15:54:40.580962 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-lib-modules\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.581943 kubelet[1860]: I0213 15:54:40.580996 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-xtables-lock\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.581943 kubelet[1860]: I0213 15:54:40.581030 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-var-run-calico\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.582171 kubelet[1860]: I0213 15:54:40.581065 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-bin-dir\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.582171 kubelet[1860]: I0213 15:54:40.581098 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c648307b-a0d9-4267-b886-21149766168f-xtables-lock\") pod \"kube-proxy-v7gff\" (UID: \"c648307b-a0d9-4267-b886-21149766168f\") " pod="kube-system/kube-proxy-v7gff"
Feb 13 15:54:40.582171 kubelet[1860]: I0213 15:54:40.581151 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmqxl\" (UniqueName: \"kubernetes.io/projected/fbf57cb1-3114-4738-90d1-01b2062bf75f-kube-api-access-lmqxl\") pod \"csi-node-driver-hkphc\" (UID: \"fbf57cb1-3114-4738-90d1-01b2062bf75f\") " pod="calico-system/csi-node-driver-hkphc"
Feb 13 15:54:40.582171 kubelet[1860]: I0213 15:54:40.581199 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c648307b-a0d9-4267-b886-21149766168f-kube-proxy\") pod \"kube-proxy-v7gff\" (UID: \"c648307b-a0d9-4267-b886-21149766168f\") " pod="kube-system/kube-proxy-v7gff"
Feb 13 15:54:40.582171 kubelet[1860]: I0213 15:54:40.581238 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs6fj\" (UniqueName: \"kubernetes.io/projected/c648307b-a0d9-4267-b886-21149766168f-kube-api-access-gs6fj\") pod \"kube-proxy-v7gff\" (UID: \"c648307b-a0d9-4267-b886-21149766168f\") " pod="kube-system/kube-proxy-v7gff"
Feb 13 15:54:40.582394 kubelet[1860]: I0213 15:54:40.581280 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26a28cea-7c71-4536-9836-b29824c3f605-tigera-ca-bundle\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.582394 kubelet[1860]: I0213 15:54:40.581316 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-net-dir\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.582394 kubelet[1860]: I0213 15:54:40.581351 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fbf57cb1-3114-4738-90d1-01b2062bf75f-varrun\") pod \"csi-node-driver-hkphc\" (UID: \"fbf57cb1-3114-4738-90d1-01b2062bf75f\") " pod="calico-system/csi-node-driver-hkphc"
Feb 13 15:54:40.582394 kubelet[1860]: I0213 15:54:40.581385 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fbf57cb1-3114-4738-90d1-01b2062bf75f-registration-dir\") pod \"csi-node-driver-hkphc\" (UID: \"fbf57cb1-3114-4738-90d1-01b2062bf75f\") " pod="calico-system/csi-node-driver-hkphc"
Feb 13 15:54:40.582394 kubelet[1860]: I0213 15:54:40.581421 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85b99\" (UniqueName: \"kubernetes.io/projected/26a28cea-7c71-4536-9836-b29824c3f605-kube-api-access-85b99\") pod \"calico-node-4l9qs\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " pod="calico-system/calico-node-4l9qs"
Feb 13 15:54:40.582672 kubelet[1860]: I0213 15:54:40.581456 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fbf57cb1-3114-4738-90d1-01b2062bf75f-socket-dir\") pod \"csi-node-driver-hkphc\" (UID: \"fbf57cb1-3114-4738-90d1-01b2062bf75f\") " pod="calico-system/csi-node-driver-hkphc"
Feb 13 15:54:40.592281 systemd[1]: Created slice kubepods-besteffort-pod26a28cea_7c71_4536_9836_b29824c3f605.slice - libcontainer container kubepods-besteffort-pod26a28cea_7c71_4536_9836_b29824c3f605.slice.
Feb 13 15:54:40.683311 kubelet[1860]: E0213 15:54:40.683166 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.683311 kubelet[1860]: W0213 15:54:40.683194 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.683311 kubelet[1860]: E0213 15:54:40.683231 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.686929 kubelet[1860]: E0213 15:54:40.686508 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.686929 kubelet[1860]: W0213 15:54:40.686534 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.686929 kubelet[1860]: E0213 15:54:40.686599 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.687161 kubelet[1860]: E0213 15:54:40.686963 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.687161 kubelet[1860]: W0213 15:54:40.686977 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.687161 kubelet[1860]: E0213 15:54:40.687004 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.688301 kubelet[1860]: E0213 15:54:40.688036 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.688301 kubelet[1860]: W0213 15:54:40.688054 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.688301 kubelet[1860]: E0213 15:54:40.688152 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.688675 kubelet[1860]: E0213 15:54:40.688444 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.688675 kubelet[1860]: W0213 15:54:40.688457 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.688675 kubelet[1860]: E0213 15:54:40.688554 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.689190 kubelet[1860]: E0213 15:54:40.688851 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.689190 kubelet[1860]: W0213 15:54:40.688867 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.690147 kubelet[1860]: E0213 15:54:40.690007 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.690147 kubelet[1860]: E0213 15:54:40.690028 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.690147 kubelet[1860]: W0213 15:54:40.690041 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.690885 kubelet[1860]: E0213 15:54:40.690693 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.690885 kubelet[1860]: W0213 15:54:40.690766 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.691418 kubelet[1860]: E0213 15:54:40.691285 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.691418 kubelet[1860]: W0213 15:54:40.691315 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.691928 kubelet[1860]: E0213 15:54:40.691787 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.691928 kubelet[1860]: E0213 15:54:40.690743 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.691928 kubelet[1860]: E0213 15:54:40.691891 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.691928 kubelet[1860]: W0213 15:54:40.691902 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.692383 kubelet[1860]: E0213 15:54:40.692187 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.692383 kubelet[1860]: E0213 15:54:40.692270 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.693856 kubelet[1860]: E0213 15:54:40.693280 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.693856 kubelet[1860]: W0213 15:54:40.693300 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.693856 kubelet[1860]: E0213 15:54:40.693377 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.694804 kubelet[1860]: E0213 15:54:40.694786 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.695040 kubelet[1860]: W0213 15:54:40.694928 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.695040 kubelet[1860]: E0213 15:54:40.695016 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.695632 kubelet[1860]: E0213 15:54:40.695474 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.695632 kubelet[1860]: W0213 15:54:40.695489 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.695857 kubelet[1860]: E0213 15:54:40.695797 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.696262 kubelet[1860]: E0213 15:54:40.696146 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.696262 kubelet[1860]: W0213 15:54:40.696161 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.696721 kubelet[1860]: E0213 15:54:40.696491 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:54:40.697042 kubelet[1860]: E0213 15:54:40.696922 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:54:40.697042 kubelet[1860]: W0213 15:54:40.696938 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:54:40.697216 kubelet[1860]: E0213 15:54:40.697188 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.697546 kubelet[1860]: E0213 15:54:40.697521 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.697765 kubelet[1860]: W0213 15:54:40.697647 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.698249 kubelet[1860]: E0213 15:54:40.698128 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.698249 kubelet[1860]: W0213 15:54:40.698146 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.700798 kubelet[1860]: E0213 15:54:40.700660 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.700798 kubelet[1860]: W0213 15:54:40.700689 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.701322 kubelet[1860]: E0213 15:54:40.701189 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.701322 kubelet[1860]: W0213 15:54:40.701219 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.701835 kubelet[1860]: E0213 15:54:40.701706 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Feb 13 15:54:40.701835 kubelet[1860]: W0213 15:54:40.701722 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.702462 kubelet[1860]: E0213 15:54:40.702341 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.702462 kubelet[1860]: E0213 15:54:40.702367 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.702462 kubelet[1860]: E0213 15:54:40.702398 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.702462 kubelet[1860]: E0213 15:54:40.702427 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.702462 kubelet[1860]: E0213 15:54:40.702444 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.703778 kubelet[1860]: E0213 15:54:40.703529 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.703778 kubelet[1860]: W0213 15:54:40.703548 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.704143 kubelet[1860]: E0213 15:54:40.703982 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.704143 kubelet[1860]: W0213 15:54:40.703995 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.704345 kubelet[1860]: E0213 15:54:40.704307 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.704345 kubelet[1860]: E0213 15:54:40.704341 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.704469 kubelet[1860]: E0213 15:54:40.704413 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.704469 kubelet[1860]: W0213 15:54:40.704424 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.704603 kubelet[1860]: E0213 15:54:40.704528 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.707202 kubelet[1860]: E0213 15:54:40.705840 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.707202 kubelet[1860]: W0213 15:54:40.705860 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.707202 kubelet[1860]: E0213 15:54:40.705960 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.707202 kubelet[1860]: E0213 15:54:40.706641 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.707202 kubelet[1860]: W0213 15:54:40.706656 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.707202 kubelet[1860]: E0213 15:54:40.706923 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.707202 kubelet[1860]: W0213 15:54:40.706935 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.707202 kubelet[1860]: E0213 15:54:40.707117 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.707202 kubelet[1860]: E0213 15:54:40.707141 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.708707 kubelet[1860]: E0213 15:54:40.708654 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.708707 kubelet[1860]: W0213 15:54:40.708677 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.708936 kubelet[1860]: E0213 15:54:40.708787 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.709089 kubelet[1860]: E0213 15:54:40.709070 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.709185 kubelet[1860]: W0213 15:54:40.709090 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.709630 kubelet[1860]: E0213 15:54:40.709606 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.711648 kubelet[1860]: E0213 15:54:40.711623 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.711648 kubelet[1860]: W0213 15:54:40.711647 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.712364 kubelet[1860]: E0213 15:54:40.711760 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.712364 kubelet[1860]: E0213 15:54:40.712150 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.712364 kubelet[1860]: W0213 15:54:40.712166 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.712364 kubelet[1860]: E0213 15:54:40.712264 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.712634 kubelet[1860]: E0213 15:54:40.712585 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.712634 kubelet[1860]: W0213 15:54:40.712599 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.714405 kubelet[1860]: E0213 15:54:40.714254 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.715646 kubelet[1860]: E0213 15:54:40.714797 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.715646 kubelet[1860]: W0213 15:54:40.714814 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.715646 kubelet[1860]: E0213 15:54:40.714936 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.715646 kubelet[1860]: E0213 15:54:40.715161 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.715646 kubelet[1860]: W0213 15:54:40.715172 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.715646 kubelet[1860]: E0213 15:54:40.715270 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.715953 kubelet[1860]: E0213 15:54:40.715657 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.715953 kubelet[1860]: W0213 15:54:40.715671 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.715953 kubelet[1860]: E0213 15:54:40.715777 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.716250 kubelet[1860]: E0213 15:54:40.716229 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.716250 kubelet[1860]: W0213 15:54:40.716247 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.716533 kubelet[1860]: E0213 15:54:40.716386 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.716648 kubelet[1860]: E0213 15:54:40.716618 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.716648 kubelet[1860]: W0213 15:54:40.716631 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.716790 kubelet[1860]: E0213 15:54:40.716760 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.717039 kubelet[1860]: E0213 15:54:40.717003 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.717039 kubelet[1860]: W0213 15:54:40.717019 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.717162 kubelet[1860]: E0213 15:54:40.717153 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.717430 kubelet[1860]: E0213 15:54:40.717411 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.717430 kubelet[1860]: W0213 15:54:40.717428 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.717724 kubelet[1860]: E0213 15:54:40.717607 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.717793 kubelet[1860]: E0213 15:54:40.717761 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.717793 kubelet[1860]: W0213 15:54:40.717781 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.717928 kubelet[1860]: E0213 15:54:40.717908 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.718183 kubelet[1860]: E0213 15:54:40.718164 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.718183 kubelet[1860]: W0213 15:54:40.718181 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.718303 kubelet[1860]: E0213 15:54:40.718282 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.718606 kubelet[1860]: E0213 15:54:40.718555 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.718606 kubelet[1860]: W0213 15:54:40.718604 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.718822 kubelet[1860]: E0213 15:54:40.718724 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.719114 kubelet[1860]: E0213 15:54:40.718931 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.719114 kubelet[1860]: W0213 15:54:40.718943 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.719282 kubelet[1860]: E0213 15:54:40.719248 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.719468 kubelet[1860]: E0213 15:54:40.719450 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.719468 kubelet[1860]: W0213 15:54:40.719466 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.719743 kubelet[1860]: E0213 15:54:40.719637 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.719906 kubelet[1860]: E0213 15:54:40.719815 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.719906 kubelet[1860]: W0213 15:54:40.719829 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.719906 kubelet[1860]: E0213 15:54:40.719867 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.720127 kubelet[1860]: E0213 15:54:40.720112 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.720127 kubelet[1860]: W0213 15:54:40.720124 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.720298 kubelet[1860]: E0213 15:54:40.720156 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.720941 kubelet[1860]: E0213 15:54:40.720921 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.720941 kubelet[1860]: W0213 15:54:40.720939 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.721136 kubelet[1860]: E0213 15:54:40.721044 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.721267 kubelet[1860]: E0213 15:54:40.721250 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.721267 kubelet[1860]: W0213 15:54:40.721265 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.721432 kubelet[1860]: E0213 15:54:40.721371 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.721614 kubelet[1860]: E0213 15:54:40.721595 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.721614 kubelet[1860]: W0213 15:54:40.721612 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.721824 kubelet[1860]: E0213 15:54:40.721713 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.721937 kubelet[1860]: E0213 15:54:40.721919 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.721937 kubelet[1860]: W0213 15:54:40.721936 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.722117 kubelet[1860]: E0213 15:54:40.721970 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.722228 kubelet[1860]: E0213 15:54:40.722208 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.722228 kubelet[1860]: W0213 15:54:40.722225 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.722691 kubelet[1860]: E0213 15:54:40.722305 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.722691 kubelet[1860]: E0213 15:54:40.722550 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.722691 kubelet[1860]: W0213 15:54:40.722613 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.722929 kubelet[1860]: E0213 15:54:40.722909 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.723004 kubelet[1860]: W0213 15:54:40.722930 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.723676 kubelet[1860]: E0213 15:54:40.723642 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.724103 kubelet[1860]: E0213 15:54:40.724083 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.724194 kubelet[1860]: E0213 15:54:40.724171 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.724194 kubelet[1860]: W0213 15:54:40.724182 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.724603 kubelet[1860]: E0213 15:54:40.724317 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.724603 kubelet[1860]: E0213 15:54:40.724495 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.724603 kubelet[1860]: W0213 15:54:40.724507 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.724855 kubelet[1860]: E0213 15:54:40.724800 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.724855 kubelet[1860]: W0213 15:54:40.724813 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.725045 kubelet[1860]: E0213 15:54:40.725028 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.725176 kubelet[1860]: E0213 15:54:40.725106 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.725176 kubelet[1860]: W0213 15:54:40.725174 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.725683 kubelet[1860]: E0213 15:54:40.725122 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.725683 kubelet[1860]: E0213 15:54:40.725460 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.725683 kubelet[1860]: W0213 15:54:40.725472 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.725683 kubelet[1860]: E0213 15:54:40.725606 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.725683 kubelet[1860]: E0213 15:54:40.725644 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.725927 kubelet[1860]: E0213 15:54:40.725761 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.725927 kubelet[1860]: W0213 15:54:40.725773 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.725927 kubelet[1860]: E0213 15:54:40.725803 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.758914 kubelet[1860]: E0213 15:54:40.758721 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.758914 kubelet[1860]: W0213 15:54:40.758762 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.758914 kubelet[1860]: E0213 15:54:40.758804 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:54:40.762119 kubelet[1860]: E0213 15:54:40.761085 1860 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:54:40.762119 kubelet[1860]: W0213 15:54:40.761108 1860 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:54:40.762119 kubelet[1860]: E0213 15:54:40.761136 1860 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:54:40.891604 containerd[1488]: time="2025-02-13T15:54:40.891521911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7gff,Uid:c648307b-a0d9-4267-b886-21149766168f,Namespace:kube-system,Attempt:0,}" Feb 13 15:54:40.896875 containerd[1488]: time="2025-02-13T15:54:40.896775775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4l9qs,Uid:26a28cea-7c71-4536-9836-b29824c3f605,Namespace:calico-system,Attempt:0,}" Feb 13 15:54:41.324317 containerd[1488]: time="2025-02-13T15:54:41.324243860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:54:41.326715 containerd[1488]: time="2025-02-13T15:54:41.326661220Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:54:41.327829 containerd[1488]: time="2025-02-13T15:54:41.327754568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 15:54:41.328894 containerd[1488]: time="2025-02-13T15:54:41.328768722Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:54:41.334771 containerd[1488]: time="2025-02-13T15:54:41.334467056Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:54:41.337957 containerd[1488]: time="2025-02-13T15:54:41.337914276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:54:41.339178 containerd[1488]: time="2025-02-13T15:54:41.339134280Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 447.438882ms" Feb 13 15:54:41.342273 containerd[1488]: time="2025-02-13T15:54:41.342221136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.248404ms" Feb 13 15:54:41.501325 containerd[1488]: time="2025-02-13T15:54:41.501209227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:54:41.501600 containerd[1488]: time="2025-02-13T15:54:41.501397093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:54:41.501926 containerd[1488]: time="2025-02-13T15:54:41.501817951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:41.503618 containerd[1488]: time="2025-02-13T15:54:41.502361890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:41.508276 containerd[1488]: time="2025-02-13T15:54:41.507848990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:54:41.508276 containerd[1488]: time="2025-02-13T15:54:41.507913646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:54:41.508276 containerd[1488]: time="2025-02-13T15:54:41.507928476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:41.508276 containerd[1488]: time="2025-02-13T15:54:41.508067454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:41.547342 kubelet[1860]: E0213 15:54:41.546584 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:41.629820 systemd[1]: Started cri-containerd-f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11.scope - libcontainer container f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11. Feb 13 15:54:41.635840 systemd[1]: Started cri-containerd-dfa7ab3b562453b3779ea5ea0a1c4a1d840f0c04b16c17d832011d1086a5863f.scope - libcontainer container dfa7ab3b562453b3779ea5ea0a1c4a1d840f0c04b16c17d832011d1086a5863f. 
Feb 13 15:54:41.684809 containerd[1488]: time="2025-02-13T15:54:41.684533710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4l9qs,Uid:26a28cea-7c71-4536-9836-b29824c3f605,Namespace:calico-system,Attempt:0,} returns sandbox id \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\"" Feb 13 15:54:41.691424 containerd[1488]: time="2025-02-13T15:54:41.690621999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7gff,Uid:c648307b-a0d9-4267-b886-21149766168f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfa7ab3b562453b3779ea5ea0a1c4a1d840f0c04b16c17d832011d1086a5863f\"" Feb 13 15:54:41.697111 containerd[1488]: time="2025-02-13T15:54:41.696710486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:54:41.707209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3247235266.mount: Deactivated successfully. Feb 13 15:54:42.547123 kubelet[1860]: E0213 15:54:42.547064 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:42.689218 kubelet[1860]: E0213 15:54:42.689168 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:42.780997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176103378.mount: Deactivated successfully. 
Feb 13 15:54:42.910666 containerd[1488]: time="2025-02-13T15:54:42.910495980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:42.911960 containerd[1488]: time="2025-02-13T15:54:42.911881149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 15:54:42.913292 containerd[1488]: time="2025-02-13T15:54:42.913246032Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:42.917774 containerd[1488]: time="2025-02-13T15:54:42.917677474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:42.919260 containerd[1488]: time="2025-02-13T15:54:42.918611656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.221828947s" Feb 13 15:54:42.919260 containerd[1488]: time="2025-02-13T15:54:42.918656667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 15:54:42.919792 containerd[1488]: time="2025-02-13T15:54:42.919760008Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:54:42.921393 containerd[1488]: time="2025-02-13T15:54:42.921355968Z" level=info msg="CreateContainer within sandbox 
\"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:54:42.942699 containerd[1488]: time="2025-02-13T15:54:42.942649217Z" level=info msg="CreateContainer within sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091\"" Feb 13 15:54:42.943591 containerd[1488]: time="2025-02-13T15:54:42.943541383Z" level=info msg="StartContainer for \"9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091\"" Feb 13 15:54:42.989812 systemd[1]: Started cri-containerd-9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091.scope - libcontainer container 9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091. Feb 13 15:54:43.039164 containerd[1488]: time="2025-02-13T15:54:43.039107008Z" level=info msg="StartContainer for \"9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091\" returns successfully" Feb 13 15:54:43.052318 systemd[1]: cri-containerd-9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091.scope: Deactivated successfully. 
Feb 13 15:54:43.140242 containerd[1488]: time="2025-02-13T15:54:43.140138879Z" level=info msg="shim disconnected" id=9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091 namespace=k8s.io Feb 13 15:54:43.140242 containerd[1488]: time="2025-02-13T15:54:43.140211252Z" level=warning msg="cleaning up after shim disconnected" id=9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091 namespace=k8s.io Feb 13 15:54:43.140242 containerd[1488]: time="2025-02-13T15:54:43.140227527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:54:43.548335 kubelet[1860]: E0213 15:54:43.548281 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:43.732835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091-rootfs.mount: Deactivated successfully. Feb 13 15:54:44.103692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739390616.mount: Deactivated successfully. 
Feb 13 15:54:44.549997 kubelet[1860]: E0213 15:54:44.548979 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:44.670267 containerd[1488]: time="2025-02-13T15:54:44.670191305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:44.672246 containerd[1488]: time="2025-02-13T15:54:44.671778758Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28622487" Feb 13 15:54:44.674984 containerd[1488]: time="2025-02-13T15:54:44.673346531Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:44.676229 containerd[1488]: time="2025-02-13T15:54:44.676191027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:44.677214 containerd[1488]: time="2025-02-13T15:54:44.677173040Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.756914219s" Feb 13 15:54:44.677404 containerd[1488]: time="2025-02-13T15:54:44.677376527Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 15:54:44.678767 containerd[1488]: time="2025-02-13T15:54:44.678212090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:54:44.680343 containerd[1488]: 
time="2025-02-13T15:54:44.680297327Z" level=info msg="CreateContainer within sandbox \"dfa7ab3b562453b3779ea5ea0a1c4a1d840f0c04b16c17d832011d1086a5863f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:54:44.689026 kubelet[1860]: E0213 15:54:44.688906 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:44.701200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1874739994.mount: Deactivated successfully. Feb 13 15:54:44.702531 containerd[1488]: time="2025-02-13T15:54:44.702420772Z" level=info msg="CreateContainer within sandbox \"dfa7ab3b562453b3779ea5ea0a1c4a1d840f0c04b16c17d832011d1086a5863f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09d03718b1c0e1d7c55ddb37f3f7c850f0871631fb2dd0616f383ab3a1fd6570\"" Feb 13 15:54:44.703500 containerd[1488]: time="2025-02-13T15:54:44.703448540Z" level=info msg="StartContainer for \"09d03718b1c0e1d7c55ddb37f3f7c850f0871631fb2dd0616f383ab3a1fd6570\"" Feb 13 15:54:44.756794 systemd[1]: Started cri-containerd-09d03718b1c0e1d7c55ddb37f3f7c850f0871631fb2dd0616f383ab3a1fd6570.scope - libcontainer container 09d03718b1c0e1d7c55ddb37f3f7c850f0871631fb2dd0616f383ab3a1fd6570. 
Feb 13 15:54:44.804468 containerd[1488]: time="2025-02-13T15:54:44.803368433Z" level=info msg="StartContainer for \"09d03718b1c0e1d7c55ddb37f3f7c850f0871631fb2dd0616f383ab3a1fd6570\" returns successfully" Feb 13 15:54:45.549816 kubelet[1860]: E0213 15:54:45.549748 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:46.550976 kubelet[1860]: E0213 15:54:46.550896 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:46.688930 kubelet[1860]: E0213 15:54:46.688450 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:47.551864 kubelet[1860]: E0213 15:54:47.551819 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:48.510493 containerd[1488]: time="2025-02-13T15:54:48.510420608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:48.511830 containerd[1488]: time="2025-02-13T15:54:48.511761763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 15:54:48.513045 containerd[1488]: time="2025-02-13T15:54:48.512975280Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:48.515929 containerd[1488]: time="2025-02-13T15:54:48.515867280Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:48.517230 containerd[1488]: time="2025-02-13T15:54:48.516865710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.838584324s" Feb 13 15:54:48.517230 containerd[1488]: time="2025-02-13T15:54:48.516930883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 15:54:48.519599 containerd[1488]: time="2025-02-13T15:54:48.519543143Z" level=info msg="CreateContainer within sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:54:48.538798 containerd[1488]: time="2025-02-13T15:54:48.538748852Z" level=info msg="CreateContainer within sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107\"" Feb 13 15:54:48.539505 containerd[1488]: time="2025-02-13T15:54:48.539461567Z" level=info msg="StartContainer for \"5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107\"" Feb 13 15:54:48.553392 kubelet[1860]: E0213 15:54:48.553305 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:48.582800 systemd[1]: run-containerd-runc-k8s.io-5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107-runc.5eToOk.mount: Deactivated successfully. 
Feb 13 15:54:48.594809 systemd[1]: Started cri-containerd-5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107.scope - libcontainer container 5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107. Feb 13 15:54:48.639590 containerd[1488]: time="2025-02-13T15:54:48.637427023Z" level=info msg="StartContainer for \"5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107\" returns successfully" Feb 13 15:54:48.689457 kubelet[1860]: E0213 15:54:48.689391 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:48.782499 kubelet[1860]: I0213 15:54:48.782352 1860 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v7gff" podStartSLOduration=6.802783464 podStartE2EDuration="9.78228109s" podCreationTimestamp="2025-02-13 15:54:39 +0000 UTC" firstStartedPulling="2025-02-13 15:54:41.698450522 +0000 UTC m=+2.871242782" lastFinishedPulling="2025-02-13 15:54:44.677948157 +0000 UTC m=+5.850740408" observedRunningTime="2025-02-13 15:54:45.750909665 +0000 UTC m=+6.923701936" watchObservedRunningTime="2025-02-13 15:54:48.78228109 +0000 UTC m=+9.955073358" Feb 13 15:54:49.492212 containerd[1488]: time="2025-02-13T15:54:49.492153538Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:54:49.494151 systemd[1]: cri-containerd-5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107.scope: Deactivated successfully. 
Feb 13 15:54:49.524487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107-rootfs.mount: Deactivated successfully. Feb 13 15:54:49.553608 kubelet[1860]: E0213 15:54:49.553520 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:49.586474 kubelet[1860]: I0213 15:54:49.586436 1860 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:54:50.553746 kubelet[1860]: E0213 15:54:50.553674 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:50.638321 containerd[1488]: time="2025-02-13T15:54:50.638198148Z" level=info msg="shim disconnected" id=5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107 namespace=k8s.io Feb 13 15:54:50.638321 containerd[1488]: time="2025-02-13T15:54:50.638300635Z" level=warning msg="cleaning up after shim disconnected" id=5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107 namespace=k8s.io Feb 13 15:54:50.638321 containerd[1488]: time="2025-02-13T15:54:50.638319989Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:54:50.695420 systemd[1]: Created slice kubepods-besteffort-podfbf57cb1_3114_4738_90d1_01b2062bf75f.slice - libcontainer container kubepods-besteffort-podfbf57cb1_3114_4738_90d1_01b2062bf75f.slice. 
Feb 13 15:54:50.699070 containerd[1488]: time="2025-02-13T15:54:50.698963842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:0,}" Feb 13 15:54:50.749601 containerd[1488]: time="2025-02-13T15:54:50.749427086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:54:50.795835 containerd[1488]: time="2025-02-13T15:54:50.795761880Z" level=error msg="Failed to destroy network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:50.798043 containerd[1488]: time="2025-02-13T15:54:50.797934500Z" level=error msg="encountered an error cleaning up failed sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:50.798602 containerd[1488]: time="2025-02-13T15:54:50.798058677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:50.798713 kubelet[1860]: E0213 15:54:50.798427 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:50.798713 kubelet[1860]: E0213 15:54:50.798509 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:50.798713 kubelet[1860]: E0213 15:54:50.798545 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:50.798876 kubelet[1860]: E0213 15:54:50.798649 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" 
podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:50.799780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626-shm.mount: Deactivated successfully. Feb 13 15:54:51.554211 kubelet[1860]: E0213 15:54:51.554162 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:51.751341 kubelet[1860]: I0213 15:54:51.749743 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626" Feb 13 15:54:51.751534 containerd[1488]: time="2025-02-13T15:54:51.751003103Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:54:51.754591 containerd[1488]: time="2025-02-13T15:54:51.752190835Z" level=info msg="Ensure that sandbox 19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626 in task-service has been cleanup successfully" Feb 13 15:54:51.755123 containerd[1488]: time="2025-02-13T15:54:51.754920692Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:54:51.755123 containerd[1488]: time="2025-02-13T15:54:51.754949561Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:54:51.756097 systemd[1]: run-netns-cni\x2d5d95e7c0\x2d36dd\x2d3664\x2dabc5\x2d3bfcb7658562.mount: Deactivated successfully. 
Feb 13 15:54:51.758605 containerd[1488]: time="2025-02-13T15:54:51.757413705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:1,}" Feb 13 15:54:51.898858 containerd[1488]: time="2025-02-13T15:54:51.898542598Z" level=error msg="Failed to destroy network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:51.899589 containerd[1488]: time="2025-02-13T15:54:51.899335066Z" level=error msg="encountered an error cleaning up failed sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:51.899589 containerd[1488]: time="2025-02-13T15:54:51.899432281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:51.902283 kubelet[1860]: E0213 15:54:51.899820 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:51.903175 kubelet[1860]: E0213 15:54:51.902652 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:51.903175 kubelet[1860]: E0213 15:54:51.902728 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:51.903175 kubelet[1860]: E0213 15:54:51.902846 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:51.905450 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451-shm.mount: Deactivated successfully. Feb 13 15:54:52.554871 kubelet[1860]: E0213 15:54:52.554778 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:52.753289 kubelet[1860]: I0213 15:54:52.753240 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451" Feb 13 15:54:52.754376 containerd[1488]: time="2025-02-13T15:54:52.753956008Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:54:52.754376 containerd[1488]: time="2025-02-13T15:54:52.754254341Z" level=info msg="Ensure that sandbox a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451 in task-service has been cleanup successfully" Feb 13 15:54:52.758671 containerd[1488]: time="2025-02-13T15:54:52.756645742Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:54:52.758671 containerd[1488]: time="2025-02-13T15:54:52.756678784Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:54:52.758671 containerd[1488]: time="2025-02-13T15:54:52.757061634Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:54:52.758671 containerd[1488]: time="2025-02-13T15:54:52.757173971Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:54:52.758671 containerd[1488]: time="2025-02-13T15:54:52.757190492Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:54:52.757299 
systemd[1]: run-netns-cni\x2d9bbfac3c\x2dc3a8\x2da5eb\x2d8689\x2d64e39cc1496a.mount: Deactivated successfully. Feb 13 15:54:52.760141 containerd[1488]: time="2025-02-13T15:54:52.759736830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:2,}" Feb 13 15:54:53.227169 containerd[1488]: time="2025-02-13T15:54:53.227110969Z" level=error msg="Failed to destroy network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:53.229603 containerd[1488]: time="2025-02-13T15:54:53.227923244Z" level=error msg="encountered an error cleaning up failed sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:53.229603 containerd[1488]: time="2025-02-13T15:54:53.228015779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:53.230909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2-shm.mount: Deactivated successfully. 
Feb 13 15:54:53.232450 kubelet[1860]: E0213 15:54:53.230951 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:53.232450 kubelet[1860]: E0213 15:54:53.231027 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:53.232450 kubelet[1860]: E0213 15:54:53.231079 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:53.232735 kubelet[1860]: E0213 15:54:53.231179 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:53.555908 kubelet[1860]: E0213 15:54:53.555419 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:53.758716 kubelet[1860]: I0213 15:54:53.758419 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2" Feb 13 15:54:53.760166 containerd[1488]: time="2025-02-13T15:54:53.760038275Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:54:53.760862 containerd[1488]: time="2025-02-13T15:54:53.760815460Z" level=info msg="Ensure that sandbox 76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2 in task-service has been cleanup successfully" Feb 13 15:54:53.764639 systemd[1]: run-netns-cni\x2d629b5e43\x2dd78f\x2d9bd4\x2d68f8\x2df18d843879f4.mount: Deactivated successfully. 
Feb 13 15:54:53.765072 containerd[1488]: time="2025-02-13T15:54:53.764624848Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:54:53.765072 containerd[1488]: time="2025-02-13T15:54:53.764651422Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:54:53.767126 containerd[1488]: time="2025-02-13T15:54:53.765871849Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:54:53.767126 containerd[1488]: time="2025-02-13T15:54:53.766002049Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:54:53.767126 containerd[1488]: time="2025-02-13T15:54:53.766021086Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:54:53.767126 containerd[1488]: time="2025-02-13T15:54:53.766367478Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:54:53.767126 containerd[1488]: time="2025-02-13T15:54:53.766472211Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:54:53.767126 containerd[1488]: time="2025-02-13T15:54:53.766489948Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:54:53.767476 containerd[1488]: time="2025-02-13T15:54:53.767168154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:3,}" Feb 13 15:54:53.810822 kubelet[1860]: I0213 15:54:53.810319 1860 topology_manager.go:215] "Topology Admit Handler" 
podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050" podNamespace="default" podName="nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:53.822676 systemd[1]: Created slice kubepods-besteffort-pod8836a9fd_bf45_4f06_b3a3_1533b6113050.slice - libcontainer container kubepods-besteffort-pod8836a9fd_bf45_4f06_b3a3_1533b6113050.slice. Feb 13 15:54:53.876024 kubelet[1860]: I0213 15:54:53.875844 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9f7x\" (UniqueName: \"kubernetes.io/projected/8836a9fd-bf45-4f06-b3a3-1533b6113050-kube-api-access-f9f7x\") pod \"nginx-deployment-6d5f899847-jfhxb\" (UID: \"8836a9fd-bf45-4f06-b3a3-1533b6113050\") " pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:53.949609 containerd[1488]: time="2025-02-13T15:54:53.947700356Z" level=error msg="Failed to destroy network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:53.951397 containerd[1488]: time="2025-02-13T15:54:53.950107776Z" level=error msg="encountered an error cleaning up failed sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:53.951397 containerd[1488]: time="2025-02-13T15:54:53.950197439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:53.951682 kubelet[1860]: E0213 15:54:53.950518 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:53.951682 kubelet[1860]: E0213 15:54:53.950618 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:53.951682 kubelet[1860]: E0213 15:54:53.950655 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:53.951868 kubelet[1860]: E0213 15:54:53.950734 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:53.952845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2-shm.mount: Deactivated successfully. Feb 13 15:54:54.129198 containerd[1488]: time="2025-02-13T15:54:54.129060604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:0,}" Feb 13 15:54:54.256957 containerd[1488]: time="2025-02-13T15:54:54.256789548Z" level=error msg="Failed to destroy network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:54.257830 containerd[1488]: time="2025-02-13T15:54:54.257606315Z" level=error msg="encountered an error cleaning up failed sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:54.257830 containerd[1488]: time="2025-02-13T15:54:54.257698059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:54.258065 kubelet[1860]: E0213 15:54:54.257997 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:54.258139 kubelet[1860]: E0213 15:54:54.258068 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:54.258139 kubelet[1860]: E0213 15:54:54.258119 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:54.258248 kubelet[1860]: E0213 15:54:54.258205 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050" Feb 13 15:54:54.556323 kubelet[1860]: E0213 15:54:54.556001 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:54.770918 kubelet[1860]: I0213 15:54:54.770364 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2" Feb 13 15:54:54.773516 containerd[1488]: time="2025-02-13T15:54:54.773023971Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:54:54.773516 containerd[1488]: time="2025-02-13T15:54:54.773319997Z" level=info msg="Ensure that sandbox 6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2 in task-service has been cleanup successfully" Feb 13 15:54:54.776001 kubelet[1860]: I0213 15:54:54.773303 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a" Feb 13 15:54:54.776130 containerd[1488]: time="2025-02-13T15:54:54.775883028Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:54:54.777868 containerd[1488]: time="2025-02-13T15:54:54.776249905Z" level=info msg="Ensure that sandbox ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a in task-service has been cleanup successfully" Feb 13 15:54:54.777957 systemd[1]: 
run-netns-cni\x2d2b7aa8df\x2d1257\x2d2de3\x2de81a\x2d656a70d7c22a.mount: Deactivated successfully. Feb 13 15:54:54.781527 containerd[1488]: time="2025-02-13T15:54:54.780712078Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully" Feb 13 15:54:54.781527 containerd[1488]: time="2025-02-13T15:54:54.780743418Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully" Feb 13 15:54:54.781527 containerd[1488]: time="2025-02-13T15:54:54.781222795Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:54:54.781527 containerd[1488]: time="2025-02-13T15:54:54.781357082Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:54:54.781527 containerd[1488]: time="2025-02-13T15:54:54.781377046Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:54:54.781527 containerd[1488]: time="2025-02-13T15:54:54.781455809Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully" Feb 13 15:54:54.781527 containerd[1488]: time="2025-02-13T15:54:54.781473058Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully" Feb 13 15:54:54.785009 systemd[1]: run-netns-cni\x2d7aab7cc4\x2df4f4\x2d6e47\x2da38c\x2d33fa9101868c.mount: Deactivated successfully. 
Feb 13 15:54:54.786051 containerd[1488]: time="2025-02-13T15:54:54.786022423Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:54:54.786253 containerd[1488]: time="2025-02-13T15:54:54.786233029Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:54:54.786972 containerd[1488]: time="2025-02-13T15:54:54.786350234Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:54:54.786972 containerd[1488]: time="2025-02-13T15:54:54.786500000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:1,}" Feb 13 15:54:54.789105 containerd[1488]: time="2025-02-13T15:54:54.789038039Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:54:54.789206 containerd[1488]: time="2025-02-13T15:54:54.789169312Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:54:54.789206 containerd[1488]: time="2025-02-13T15:54:54.789189800Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:54:54.792313 containerd[1488]: time="2025-02-13T15:54:54.792259452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:4,}" Feb 13 15:54:54.941529 containerd[1488]: time="2025-02-13T15:54:54.941257587Z" level=error msg="Failed to destroy network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:54.943350 containerd[1488]: time="2025-02-13T15:54:54.943130732Z" level=error msg="encountered an error cleaning up failed sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:54.960291 containerd[1488]: time="2025-02-13T15:54:54.959443771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:54.960482 kubelet[1860]: E0213 15:54:54.960414 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:54.960575 kubelet[1860]: E0213 15:54:54.960483 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 
13 15:54:54.960575 kubelet[1860]: E0213 15:54:54.960523 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:54.960690 kubelet[1860]: E0213 15:54:54.960658 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:55.018942 containerd[1488]: time="2025-02-13T15:54:55.018781664Z" level=error msg="Failed to destroy network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:55.019669 containerd[1488]: time="2025-02-13T15:54:55.019417389Z" level=error msg="encountered an error cleaning up failed sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:55.019669 containerd[1488]: time="2025-02-13T15:54:55.019512184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:55.020268 kubelet[1860]: E0213 15:54:55.020233 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:55.020374 kubelet[1860]: E0213 15:54:55.020334 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:55.020445 kubelet[1860]: E0213 15:54:55.020414 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:55.020959 kubelet[1860]: E0213 15:54:55.020823 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050" Feb 13 15:54:55.557120 kubelet[1860]: E0213 15:54:55.557074 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:55.768193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa-shm.mount: Deactivated successfully. Feb 13 15:54:55.768328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f-shm.mount: Deactivated successfully. 
Feb 13 15:54:55.780452 kubelet[1860]: I0213 15:54:55.780365 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa" Feb 13 15:54:55.781658 containerd[1488]: time="2025-02-13T15:54:55.781497786Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" Feb 13 15:54:55.782500 containerd[1488]: time="2025-02-13T15:54:55.781817196Z" level=info msg="Ensure that sandbox 889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa in task-service has been cleanup successfully" Feb 13 15:54:55.782500 containerd[1488]: time="2025-02-13T15:54:55.782037771Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully" Feb 13 15:54:55.782500 containerd[1488]: time="2025-02-13T15:54:55.782061399Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully" Feb 13 15:54:55.785590 containerd[1488]: time="2025-02-13T15:54:55.783748601Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:54:55.785590 containerd[1488]: time="2025-02-13T15:54:55.783877210Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully" Feb 13 15:54:55.785590 containerd[1488]: time="2025-02-13T15:54:55.783895227Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully" Feb 13 15:54:55.786390 containerd[1488]: time="2025-02-13T15:54:55.786357241Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:54:55.786501 containerd[1488]: time="2025-02-13T15:54:55.786471176Z" level=info msg="TearDown network for sandbox 
\"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:54:55.786501 containerd[1488]: time="2025-02-13T15:54:55.786488586Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:54:55.786882 systemd[1]: run-netns-cni\x2dc43c5211\x2d4dc3\x2dba85\x2da0fa\x2d1faa95ee51fd.mount: Deactivated successfully. Feb 13 15:54:55.789728 containerd[1488]: time="2025-02-13T15:54:55.789618600Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:54:55.790111 containerd[1488]: time="2025-02-13T15:54:55.789816445Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:54:55.790111 containerd[1488]: time="2025-02-13T15:54:55.789869101Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:54:55.790210 kubelet[1860]: I0213 15:54:55.790180 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f" Feb 13 15:54:55.790880 containerd[1488]: time="2025-02-13T15:54:55.790851288Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:54:55.791188 containerd[1488]: time="2025-02-13T15:54:55.791088690Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:54:55.791188 containerd[1488]: time="2025-02-13T15:54:55.791113540Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:54:55.793618 containerd[1488]: time="2025-02-13T15:54:55.791972985Z" level=info msg="StopPodSandbox for 
\"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" Feb 13 15:54:55.793618 containerd[1488]: time="2025-02-13T15:54:55.792229231Z" level=info msg="Ensure that sandbox 9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f in task-service has been cleanup successfully" Feb 13 15:54:55.793618 containerd[1488]: time="2025-02-13T15:54:55.792430449Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully" Feb 13 15:54:55.793618 containerd[1488]: time="2025-02-13T15:54:55.792450973Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully" Feb 13 15:54:55.793618 containerd[1488]: time="2025-02-13T15:54:55.792491710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:5,}" Feb 13 15:54:55.795463 systemd[1]: run-netns-cni\x2d57feb26c\x2dcfab\x2d410d\x2da523\x2dee9849932ea2.mount: Deactivated successfully. 
Feb 13 15:54:55.796715 containerd[1488]: time="2025-02-13T15:54:55.796686023Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:54:55.796973 containerd[1488]: time="2025-02-13T15:54:55.796937965Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully" Feb 13 15:54:55.797099 containerd[1488]: time="2025-02-13T15:54:55.797078909Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully" Feb 13 15:54:55.800606 containerd[1488]: time="2025-02-13T15:54:55.800412661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:2,}" Feb 13 15:54:55.987693 containerd[1488]: time="2025-02-13T15:54:55.987475472Z" level=error msg="Failed to destroy network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:55.989649 containerd[1488]: time="2025-02-13T15:54:55.988406115Z" level=error msg="encountered an error cleaning up failed sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:55.989649 containerd[1488]: time="2025-02-13T15:54:55.988504001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:55.991707 kubelet[1860]: E0213 15:54:55.991669 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:55.991890 kubelet[1860]: E0213 15:54:55.991850 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:55.992020 kubelet[1860]: E0213 15:54:55.992001 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:55.992452 kubelet[1860]: E0213 15:54:55.992425 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050" Feb 13 15:54:56.008766 containerd[1488]: time="2025-02-13T15:54:56.008706708Z" level=error msg="Failed to destroy network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:56.009328 containerd[1488]: time="2025-02-13T15:54:56.009249621Z" level=error msg="encountered an error cleaning up failed sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:56.009460 containerd[1488]: time="2025-02-13T15:54:56.009389633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:56.009732 kubelet[1860]: E0213 15:54:56.009704 1860 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:56.009828 kubelet[1860]: E0213 15:54:56.009773 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:56.009828 kubelet[1860]: E0213 15:54:56.009807 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:56.009937 kubelet[1860]: E0213 15:54:56.009884 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:56.558046 kubelet[1860]: E0213 15:54:56.557995 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:56.764661 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d-shm.mount: Deactivated successfully. Feb 13 15:54:56.765186 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498-shm.mount: Deactivated successfully. Feb 13 15:54:56.797331 kubelet[1860]: I0213 15:54:56.797291 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d" Feb 13 15:54:56.798894 containerd[1488]: time="2025-02-13T15:54:56.798286560Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\"" Feb 13 15:54:56.798894 containerd[1488]: time="2025-02-13T15:54:56.798623136Z" level=info msg="Ensure that sandbox cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d in task-service has been cleanup successfully" Feb 13 15:54:56.803483 containerd[1488]: time="2025-02-13T15:54:56.803442463Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully" Feb 13 15:54:56.803483 containerd[1488]: time="2025-02-13T15:54:56.803481167Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully" Feb 13 15:54:56.804806 containerd[1488]: time="2025-02-13T15:54:56.804535367Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" Feb 13 15:54:56.804806 containerd[1488]: time="2025-02-13T15:54:56.804698303Z" level=info 
msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully" Feb 13 15:54:56.804806 containerd[1488]: time="2025-02-13T15:54:56.804716789Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully" Feb 13 15:54:56.806305 systemd[1]: run-netns-cni\x2da2033a3e\x2d353f\x2d8bfd\x2dbcd8\x2d5831f56b0002.mount: Deactivated successfully. Feb 13 15:54:56.809687 containerd[1488]: time="2025-02-13T15:54:56.808883409Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:54:56.809687 containerd[1488]: time="2025-02-13T15:54:56.809001088Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully" Feb 13 15:54:56.809687 containerd[1488]: time="2025-02-13T15:54:56.809017832Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully" Feb 13 15:54:56.810058 containerd[1488]: time="2025-02-13T15:54:56.810022961Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:54:56.811108 containerd[1488]: time="2025-02-13T15:54:56.810139033Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:54:56.811108 containerd[1488]: time="2025-02-13T15:54:56.810160744Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:54:56.811108 containerd[1488]: time="2025-02-13T15:54:56.810690108Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:54:56.811108 containerd[1488]: time="2025-02-13T15:54:56.810797264Z" level=info msg="TearDown network for sandbox 
\"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:54:56.811108 containerd[1488]: time="2025-02-13T15:54:56.810815885Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:54:56.811358 kubelet[1860]: I0213 15:54:56.811245 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498" Feb 13 15:54:56.812599 containerd[1488]: time="2025-02-13T15:54:56.812193150Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:54:56.812599 containerd[1488]: time="2025-02-13T15:54:56.812343615Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:54:56.812599 containerd[1488]: time="2025-02-13T15:54:56.812362953Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:54:56.812599 containerd[1488]: time="2025-02-13T15:54:56.812461430Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\"" Feb 13 15:54:56.813136 containerd[1488]: time="2025-02-13T15:54:56.813103184Z" level=info msg="Ensure that sandbox 27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498 in task-service has been cleanup successfully" Feb 13 15:54:56.817657 containerd[1488]: time="2025-02-13T15:54:56.817619757Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully" Feb 13 15:54:56.817657 containerd[1488]: time="2025-02-13T15:54:56.817656084Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully" Feb 13 15:54:56.820665 containerd[1488]: 
time="2025-02-13T15:54:56.819946839Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" Feb 13 15:54:56.820665 containerd[1488]: time="2025-02-13T15:54:56.820291776Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully" Feb 13 15:54:56.820665 containerd[1488]: time="2025-02-13T15:54:56.820312789Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully" Feb 13 15:54:56.820665 containerd[1488]: time="2025-02-13T15:54:56.819958512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:6,}" Feb 13 15:54:56.821029 systemd[1]: run-netns-cni\x2d3277d0f6\x2d8aed\x2d8c48\x2de14a\x2d23312f69fc8c.mount: Deactivated successfully. Feb 13 15:54:56.822060 containerd[1488]: time="2025-02-13T15:54:56.822034087Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:54:56.822621 containerd[1488]: time="2025-02-13T15:54:56.822366470Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully" Feb 13 15:54:56.822621 containerd[1488]: time="2025-02-13T15:54:56.822450915Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully" Feb 13 15:54:56.823666 containerd[1488]: time="2025-02-13T15:54:56.823637773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:3,}" Feb 13 15:54:57.017343 containerd[1488]: time="2025-02-13T15:54:57.017236067Z" level=error msg="Failed to destroy network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:57.017838 containerd[1488]: time="2025-02-13T15:54:57.017699368Z" level=error msg="encountered an error cleaning up failed sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:57.017838 containerd[1488]: time="2025-02-13T15:54:57.017789717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:57.018437 kubelet[1860]: E0213 15:54:57.018142 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:57.018437 kubelet[1860]: E0213 15:54:57.018221 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:57.018437 kubelet[1860]: E0213 15:54:57.018257 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:57.018709 kubelet[1860]: E0213 15:54:57.018330 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050" Feb 13 15:54:57.026007 containerd[1488]: time="2025-02-13T15:54:57.025786301Z" level=error msg="Failed to destroy network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:57.026454 containerd[1488]: time="2025-02-13T15:54:57.026318937Z" level=error msg="encountered an error cleaning up failed sandbox 
\"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:57.026454 containerd[1488]: time="2025-02-13T15:54:57.026415009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:57.027135 kubelet[1860]: E0213 15:54:57.026983 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:57.027237 kubelet[1860]: E0213 15:54:57.027142 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:57.027237 kubelet[1860]: E0213 15:54:57.027198 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:57.027589 kubelet[1860]: E0213 15:54:57.027373 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:57.559343 kubelet[1860]: E0213 15:54:57.559128 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:57.770007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e-shm.mount: Deactivated successfully. 
Feb 13 15:54:57.818076 kubelet[1860]: I0213 15:54:57.817195 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5" Feb 13 15:54:57.818634 containerd[1488]: time="2025-02-13T15:54:57.818587602Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\"" Feb 13 15:54:57.819425 containerd[1488]: time="2025-02-13T15:54:57.819389133Z" level=info msg="Ensure that sandbox 2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5 in task-service has been cleanup successfully" Feb 13 15:54:57.823291 kubelet[1860]: I0213 15:54:57.823209 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e" Feb 13 15:54:57.823555 containerd[1488]: time="2025-02-13T15:54:57.823466632Z" level=info msg="TearDown network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" successfully" Feb 13 15:54:57.823779 containerd[1488]: time="2025-02-13T15:54:57.823683092Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns successfully" Feb 13 15:54:57.824272 containerd[1488]: time="2025-02-13T15:54:57.823968364Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\"" Feb 13 15:54:57.824272 containerd[1488]: time="2025-02-13T15:54:57.824236288Z" level=info msg="Ensure that sandbox d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e in task-service has been cleanup successfully" Feb 13 15:54:57.826075 containerd[1488]: time="2025-02-13T15:54:57.825831216Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully" Feb 13 15:54:57.826075 containerd[1488]: time="2025-02-13T15:54:57.825863073Z" level=info msg="StopPodSandbox for 
\"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully" Feb 13 15:54:57.826075 containerd[1488]: time="2025-02-13T15:54:57.824761803Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\"" Feb 13 15:54:57.826075 containerd[1488]: time="2025-02-13T15:54:57.826021655Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully" Feb 13 15:54:57.826075 containerd[1488]: time="2025-02-13T15:54:57.826038441Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully" Feb 13 15:54:57.827421 systemd[1]: run-netns-cni\x2d19ae51d5\x2ddca4\x2df686\x2d5bc9\x2d9dcde5248944.mount: Deactivated successfully. Feb 13 15:54:57.829171 containerd[1488]: time="2025-02-13T15:54:57.827499052Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" Feb 13 15:54:57.829171 containerd[1488]: time="2025-02-13T15:54:57.827787224Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully" Feb 13 15:54:57.829171 containerd[1488]: time="2025-02-13T15:54:57.827810026Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully" Feb 13 15:54:57.829171 containerd[1488]: time="2025-02-13T15:54:57.827922228Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\"" Feb 13 15:54:57.829171 containerd[1488]: time="2025-02-13T15:54:57.828027700Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully" Feb 13 15:54:57.829171 containerd[1488]: time="2025-02-13T15:54:57.828044832Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns 
successfully" Feb 13 15:54:57.836761 containerd[1488]: time="2025-02-13T15:54:57.833733479Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:54:57.836761 containerd[1488]: time="2025-02-13T15:54:57.833856499Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully" Feb 13 15:54:57.836761 containerd[1488]: time="2025-02-13T15:54:57.833885289Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully" Feb 13 15:54:57.836761 containerd[1488]: time="2025-02-13T15:54:57.833990751Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" Feb 13 15:54:57.836761 containerd[1488]: time="2025-02-13T15:54:57.834096070Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully" Feb 13 15:54:57.836761 containerd[1488]: time="2025-02-13T15:54:57.834113403Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully" Feb 13 15:54:57.836761 containerd[1488]: time="2025-02-13T15:54:57.836031093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:4,}" Feb 13 15:54:57.836290 systemd[1]: run-netns-cni\x2d1ac9daeb\x2d4812\x2dd7d9\x2d9090\x2daa66b24ec538.mount: Deactivated successfully. 
Feb 13 15:54:57.837673 containerd[1488]: time="2025-02-13T15:54:57.837431941Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:54:57.837673 containerd[1488]: time="2025-02-13T15:54:57.837578648Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully" Feb 13 15:54:57.840815 containerd[1488]: time="2025-02-13T15:54:57.838228482Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully" Feb 13 15:54:57.840815 containerd[1488]: time="2025-02-13T15:54:57.840689052Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:54:57.840815 containerd[1488]: time="2025-02-13T15:54:57.840799354Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:54:57.840992 containerd[1488]: time="2025-02-13T15:54:57.840827084Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:54:57.841315 containerd[1488]: time="2025-02-13T15:54:57.841288387Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:54:57.841551 containerd[1488]: time="2025-02-13T15:54:57.841526790Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:54:57.841708 containerd[1488]: time="2025-02-13T15:54:57.841673747Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:54:57.842184 containerd[1488]: time="2025-02-13T15:54:57.842156187Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:54:57.842384 
containerd[1488]: time="2025-02-13T15:54:57.842356173Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:54:57.842476 containerd[1488]: time="2025-02-13T15:54:57.842460698Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:54:57.843168 containerd[1488]: time="2025-02-13T15:54:57.843139565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:7,}" Feb 13 15:54:58.029471 containerd[1488]: time="2025-02-13T15:54:58.029390227Z" level=error msg="Failed to destroy network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:58.029987 containerd[1488]: time="2025-02-13T15:54:58.029940873Z" level=error msg="encountered an error cleaning up failed sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:58.030125 containerd[1488]: time="2025-02-13T15:54:58.030045590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:54:58.030552 kubelet[1860]: E0213 15:54:58.030497 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:58.030720 kubelet[1860]: E0213 15:54:58.030684 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:58.030816 kubelet[1860]: E0213 15:54:58.030744 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:54:58.030927 kubelet[1860]: E0213 15:54:58.030876 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:54:58.047662 containerd[1488]: time="2025-02-13T15:54:58.047411944Z" level=error msg="Failed to destroy network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:58.048440 containerd[1488]: time="2025-02-13T15:54:58.048390054Z" level=error msg="encountered an error cleaning up failed sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:58.048583 containerd[1488]: time="2025-02-13T15:54:58.048482182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:58.049051 kubelet[1860]: E0213 15:54:58.048906 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:54:58.049051 kubelet[1860]: E0213 15:54:58.049026 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:58.049208 kubelet[1860]: E0213 15:54:58.049154 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:54:58.050020 kubelet[1860]: E0213 15:54:58.049763 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050" Feb 13 15:54:58.140867 containerd[1488]: time="2025-02-13T15:54:58.140696657Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:58.142099 containerd[1488]: time="2025-02-13T15:54:58.142045395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 15:54:58.144582 containerd[1488]: time="2025-02-13T15:54:58.143396468Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:58.146321 containerd[1488]: time="2025-02-13T15:54:58.146281658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:58.147413 containerd[1488]: time="2025-02-13T15:54:58.147372458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.397876472s" Feb 13 15:54:58.147514 containerd[1488]: time="2025-02-13T15:54:58.147419454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 15:54:58.156992 containerd[1488]: time="2025-02-13T15:54:58.156954124Z" level=info msg="CreateContainer within sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:54:58.174444 containerd[1488]: time="2025-02-13T15:54:58.174373166Z" level=info msg="CreateContainer within sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" 
for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\"" Feb 13 15:54:58.175112 containerd[1488]: time="2025-02-13T15:54:58.174963919Z" level=info msg="StartContainer for \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\"" Feb 13 15:54:58.212807 systemd[1]: Started cri-containerd-df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499.scope - libcontainer container df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499. Feb 13 15:54:58.259465 containerd[1488]: time="2025-02-13T15:54:58.259408152Z" level=info msg="StartContainer for \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\" returns successfully" Feb 13 15:54:58.361055 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:54:58.361231 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:54:58.393009 systemd[1]: cri-containerd-df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499.scope: Deactivated successfully. Feb 13 15:54:58.559674 kubelet[1860]: E0213 15:54:58.559605 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:58.766535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652-shm.mount: Deactivated successfully. Feb 13 15:54:58.766724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa-shm.mount: Deactivated successfully. Feb 13 15:54:58.766837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083176878.mount: Deactivated successfully. 
Feb 13 15:54:58.829511 kubelet[1860]: I0213 15:54:58.829479 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652" Feb 13 15:54:58.834587 containerd[1488]: time="2025-02-13T15:54:58.830522971Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\"" Feb 13 15:54:58.834587 containerd[1488]: time="2025-02-13T15:54:58.832193339Z" level=info msg="Ensure that sandbox c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652 in task-service has been cleanup successfully" Feb 13 15:54:58.838022 containerd[1488]: time="2025-02-13T15:54:58.837975181Z" level=info msg="TearDown network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" successfully" Feb 13 15:54:58.838022 containerd[1488]: time="2025-02-13T15:54:58.838009764Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" returns successfully" Feb 13 15:54:58.840594 containerd[1488]: time="2025-02-13T15:54:58.839221340Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\"" Feb 13 15:54:58.840594 containerd[1488]: time="2025-02-13T15:54:58.839342533Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully" Feb 13 15:54:58.840594 containerd[1488]: time="2025-02-13T15:54:58.839360527Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully" Feb 13 15:54:58.840271 systemd[1]: run-netns-cni\x2d610afccf\x2d9e32\x2d0b16\x2d0e32\x2d106c354eeecd.mount: Deactivated successfully. 
Feb 13 15:54:58.841144 kubelet[1860]: I0213 15:54:58.839744 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa" Feb 13 15:54:58.845766 containerd[1488]: time="2025-02-13T15:54:58.841833491Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\"" Feb 13 15:54:58.845766 containerd[1488]: time="2025-02-13T15:54:58.841946495Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully" Feb 13 15:54:58.845766 containerd[1488]: time="2025-02-13T15:54:58.842010689Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully" Feb 13 15:54:58.845766 containerd[1488]: time="2025-02-13T15:54:58.842192134Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\"" Feb 13 15:54:58.845766 containerd[1488]: time="2025-02-13T15:54:58.842423446Z" level=info msg="Ensure that sandbox 0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa in task-service has been cleanup successfully" Feb 13 15:54:58.845766 containerd[1488]: time="2025-02-13T15:54:58.843681562Z" level=info msg="TearDown network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" successfully" Feb 13 15:54:58.845766 containerd[1488]: time="2025-02-13T15:54:58.843706338Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" returns successfully" Feb 13 15:54:58.847141 containerd[1488]: time="2025-02-13T15:54:58.846424537Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\"" Feb 13 15:54:58.847141 containerd[1488]: time="2025-02-13T15:54:58.846542580Z" level=info msg="TearDown network for sandbox 
\"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" successfully" Feb 13 15:54:58.847141 containerd[1488]: time="2025-02-13T15:54:58.846612906Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns successfully" Feb 13 15:54:58.847141 containerd[1488]: time="2025-02-13T15:54:58.846681308Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" Feb 13 15:54:58.847141 containerd[1488]: time="2025-02-13T15:54:58.846772096Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully" Feb 13 15:54:58.847141 containerd[1488]: time="2025-02-13T15:54:58.846788123Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.847525504Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\"" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.847658626Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.847677792Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.847528066Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.847806900Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.847821181Z" level=info msg="StopPodSandbox for 
\"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.848246894Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.848348807Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.848365296Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.848454468Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.848541006Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.848554591Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.849008331Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.849123277Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.849140466Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.849237194Z" level=info msg="StopPodSandbox for 
\"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.849328446Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:54:58.850100 containerd[1488]: time="2025-02-13T15:54:58.849345658Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:54:58.853219 containerd[1488]: time="2025-02-13T15:54:58.851581143Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:54:58.853219 containerd[1488]: time="2025-02-13T15:54:58.851686543Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:54:58.853219 containerd[1488]: time="2025-02-13T15:54:58.851704256Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:54:58.853387 systemd[1]: run-netns-cni\x2d096fab59\x2d159a\x2df169\x2d4a10\x2dc0bce5ebb61b.mount: Deactivated successfully. 
Feb 13 15:54:58.855632 containerd[1488]: time="2025-02-13T15:54:58.853745350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:5,}" Feb 13 15:54:58.855632 containerd[1488]: time="2025-02-13T15:54:58.854896495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:8,}" Feb 13 15:54:58.877947 kubelet[1860]: I0213 15:54:58.877906 1860 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-4l9qs" podStartSLOduration=3.424587271 podStartE2EDuration="19.877854199s" podCreationTimestamp="2025-02-13 15:54:39 +0000 UTC" firstStartedPulling="2025-02-13 15:54:41.694471559 +0000 UTC m=+2.867263829" lastFinishedPulling="2025-02-13 15:54:58.147738506 +0000 UTC m=+19.320530757" observedRunningTime="2025-02-13 15:54:58.877082504 +0000 UTC m=+20.049874788" watchObservedRunningTime="2025-02-13 15:54:58.877854199 +0000 UTC m=+20.050646468" Feb 13 15:54:59.544983 kubelet[1860]: E0213 15:54:59.544931 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:59.560213 kubelet[1860]: E0213 15:54:59.560143 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:54:59.987697 containerd[1488]: time="2025-02-13T15:54:59.986222488Z" level=info msg="shim disconnected" id=df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 namespace=k8s.io Feb 13 15:54:59.987697 containerd[1488]: time="2025-02-13T15:54:59.986288596Z" level=warning msg="cleaning up after shim disconnected" id=df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 namespace=k8s.io Feb 13 15:54:59.987697 containerd[1488]: time="2025-02-13T15:54:59.986302751Z" level=info msg="cleaning up dead shim" 
namespace=k8s.io Feb 13 15:54:59.997899 containerd[1488]: time="2025-02-13T15:54:59.997828658Z" level=error msg="ExecSync for \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"14dc2bfd3be65218c3453285297d4ac7eb017ca65a647aac86fc68f55a369edc\": task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" Feb 13 15:55:00.000816 kubelet[1860]: E0213 15:55:00.000773 1860 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"14dc2bfd3be65218c3453285297d4ac7eb017ca65a647aac86fc68f55a369edc\": task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" containerID="df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Feb 13 15:55:00.008202 containerd[1488]: time="2025-02-13T15:55:00.007998620Z" level=error msg="ExecSync for \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" Feb 13 15:55:00.008598 kubelet[1860]: E0213 15:55:00.008377 1860 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" containerID="df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Feb 13 15:55:00.015620 containerd[1488]: time="2025-02-13T15:55:00.012529488Z" level=error msg="ExecSync for 
\"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" Feb 13 15:55:00.017464 kubelet[1860]: E0213 15:55:00.017400 1860 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" containerID="df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Feb 13 15:55:00.023358 containerd[1488]: time="2025-02-13T15:55:00.023300619Z" level=error msg="ExecSync for \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" Feb 13 15:55:00.023894 kubelet[1860]: E0213 15:55:00.023860 1860 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" containerID="df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Feb 13 15:55:00.027447 containerd[1488]: time="2025-02-13T15:55:00.025912315Z" level=error msg="ExecSync for \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not 
found" Feb 13 15:55:00.028737 kubelet[1860]: E0213 15:55:00.028709 1860 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" containerID="df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Feb 13 15:55:00.037101 containerd[1488]: time="2025-02-13T15:55:00.036942769Z" level=error msg="ExecSync for \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" Feb 13 15:55:00.037667 kubelet[1860]: E0213 15:55:00.037638 1860 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499 not found: not found" containerID="df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Feb 13 15:55:00.106634 containerd[1488]: time="2025-02-13T15:55:00.106542841Z" level=error msg="Failed to destroy network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:00.108438 containerd[1488]: time="2025-02-13T15:55:00.108372126Z" level=error msg="encountered an error cleaning up failed sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:00.108590 containerd[1488]: time="2025-02-13T15:55:00.108486100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:00.108846 kubelet[1860]: E0213 15:55:00.108802 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:00.108964 kubelet[1860]: E0213 15:55:00.108880 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:55:00.108964 kubelet[1860]: E0213 15:55:00.108926 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:55:00.109070 kubelet[1860]: E0213 15:55:00.109005 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:55:00.111199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff-shm.mount: Deactivated successfully. 
Feb 13 15:55:00.128454 containerd[1488]: time="2025-02-13T15:55:00.128366256Z" level=error msg="Failed to destroy network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:00.128885 containerd[1488]: time="2025-02-13T15:55:00.128824191Z" level=error msg="encountered an error cleaning up failed sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:00.129031 containerd[1488]: time="2025-02-13T15:55:00.128917888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:00.129919 kubelet[1860]: E0213 15:55:00.129419    1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:00.129919 kubelet[1860]: E0213 15:55:00.129490    1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb"
Feb 13 15:55:00.129919 kubelet[1860]: E0213 15:55:00.129531    1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb"
Feb 13 15:55:00.130179 kubelet[1860]: E0213 15:55:00.129631    1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050"
Feb 13 15:55:00.132144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b-shm.mount: Deactivated successfully.
Feb 13 15:55:00.560747 kubelet[1860]: E0213 15:55:00.560681    1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:00.864416 kubelet[1860]: I0213 15:55:00.864260    1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff"
Feb 13 15:55:00.865919 containerd[1488]: time="2025-02-13T15:55:00.865151026Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\""
Feb 13 15:55:00.865919 containerd[1488]: time="2025-02-13T15:55:00.865427799Z" level=info msg="Ensure that sandbox 7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff in task-service has been cleanup successfully"
Feb 13 15:55:00.867833 containerd[1488]: time="2025-02-13T15:55:00.867800373Z" level=info msg="TearDown network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" successfully"
Feb 13 15:55:00.868296 containerd[1488]: time="2025-02-13T15:55:00.868073166Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" returns successfully"
Feb 13 15:55:00.868794 containerd[1488]: time="2025-02-13T15:55:00.868419380Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\""
Feb 13 15:55:00.868794 containerd[1488]: time="2025-02-13T15:55:00.868697996Z" level=info msg="TearDown network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" successfully"
Feb 13 15:55:00.868794 containerd[1488]: time="2025-02-13T15:55:00.868721322Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" returns successfully"
Feb 13 15:55:00.870945 containerd[1488]: time="2025-02-13T15:55:00.870880327Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\""
Feb 13 15:55:00.871108 containerd[1488]: time="2025-02-13T15:55:00.870995749Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully"
Feb 13 15:55:00.871108 containerd[1488]: time="2025-02-13T15:55:00.871014861Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully"
Feb 13 15:55:00.871263 systemd[1]: run-netns-cni\x2d4835664b\x2da3a3\x2d3bcd\x2dfc3c\x2dbe3ff4ded6a5.mount: Deactivated successfully.
Feb 13 15:55:00.874086 kubelet[1860]: I0213 15:55:00.872869    1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b"
Feb 13 15:55:00.874199 containerd[1488]: time="2025-02-13T15:55:00.872546020Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\""
Feb 13 15:55:00.874199 containerd[1488]: time="2025-02-13T15:55:00.873528617Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully"
Feb 13 15:55:00.874199 containerd[1488]: time="2025-02-13T15:55:00.873623854Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully"
Feb 13 15:55:00.874199 containerd[1488]: time="2025-02-13T15:55:00.873425067Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\""
Feb 13 15:55:00.874199 containerd[1488]: time="2025-02-13T15:55:00.873916679Z" level=info msg="Ensure that sandbox 6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b in task-service has been cleanup successfully"
Feb 13 15:55:00.874938 containerd[1488]: time="2025-02-13T15:55:00.874664256Z" level=info msg="TearDown network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" successfully"
Feb 13 15:55:00.874938 containerd[1488]: time="2025-02-13T15:55:00.874692498Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" returns successfully"
Feb 13 15:55:00.875080 containerd[1488]: time="2025-02-13T15:55:00.874937656Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\""
Feb 13 15:55:00.875080 containerd[1488]: time="2025-02-13T15:55:00.875051739Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully"
Feb 13 15:55:00.875080 containerd[1488]: time="2025-02-13T15:55:00.875070305Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully"
Feb 13 15:55:00.875753 containerd[1488]: time="2025-02-13T15:55:00.875519774Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\""
Feb 13 15:55:00.875753 containerd[1488]: time="2025-02-13T15:55:00.875524998Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\""
Feb 13 15:55:00.875753 containerd[1488]: time="2025-02-13T15:55:00.875655315Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully"
Feb 13 15:55:00.875753 containerd[1488]: time="2025-02-13T15:55:00.875672986Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully"
Feb 13 15:55:00.876008 containerd[1488]: time="2025-02-13T15:55:00.875838555Z" level=info msg="TearDown network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" successfully"
Feb 13 15:55:00.876008 containerd[1488]: time="2025-02-13T15:55:00.875857477Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" returns successfully"
Feb 13 15:55:00.876669 containerd[1488]: time="2025-02-13T15:55:00.876426058Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\""
Feb 13 15:55:00.876669 containerd[1488]: time="2025-02-13T15:55:00.876620059Z" level=info msg="TearDown network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" successfully"
Feb 13 15:55:00.876669 containerd[1488]: time="2025-02-13T15:55:00.876639644Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns successfully"
Feb 13 15:55:00.877661 containerd[1488]: time="2025-02-13T15:55:00.876714813Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\""
Feb 13 15:55:00.877661 containerd[1488]: time="2025-02-13T15:55:00.876809131Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully"
Feb 13 15:55:00.877661 containerd[1488]: time="2025-02-13T15:55:00.876826530Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully"
Feb 13 15:55:00.877661 containerd[1488]: time="2025-02-13T15:55:00.877224889Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\""
Feb 13 15:55:00.877661 containerd[1488]: time="2025-02-13T15:55:00.877328383Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully"
Feb 13 15:55:00.877661 containerd[1488]: time="2025-02-13T15:55:00.877345585Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully"
Feb 13 15:55:00.877661 containerd[1488]: time="2025-02-13T15:55:00.877380599Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\""
Feb 13 15:55:00.877661 containerd[1488]: time="2025-02-13T15:55:00.877479144Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully"
Feb 13 15:55:00.877661 containerd[1488]: time="2025-02-13T15:55:00.877495253Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully"
Feb 13 15:55:00.878677 containerd[1488]: time="2025-02-13T15:55:00.878423148Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\""
Feb 13 15:55:00.878677 containerd[1488]: time="2025-02-13T15:55:00.878525874Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\""
Feb 13 15:55:00.878677 containerd[1488]: time="2025-02-13T15:55:00.878621576Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully"
Feb 13 15:55:00.878677 containerd[1488]: time="2025-02-13T15:55:00.878640698Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully"
Feb 13 15:55:00.878978 containerd[1488]: time="2025-02-13T15:55:00.878640243Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully"
Feb 13 15:55:00.878978 containerd[1488]: time="2025-02-13T15:55:00.878698942Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully"
Feb 13 15:55:00.880239 containerd[1488]: time="2025-02-13T15:55:00.879762482Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\""
Feb 13 15:55:00.880239 containerd[1488]: time="2025-02-13T15:55:00.879809442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:9,}"
Feb 13 15:55:00.880239 containerd[1488]: time="2025-02-13T15:55:00.879868421Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully"
Feb 13 15:55:00.880239 containerd[1488]: time="2025-02-13T15:55:00.879884526Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully"
Feb 13 15:55:00.881237 kubelet[1860]: I0213 15:55:00.880668    1860 scope.go:117] "RemoveContainer" containerID="df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499"
Feb 13 15:55:00.881351 containerd[1488]: time="2025-02-13T15:55:00.880975072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:6,}"
Feb 13 15:55:00.884986 containerd[1488]: time="2025-02-13T15:55:00.884945342Z" level=info msg="CreateContainer within sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}"
Feb 13 15:55:00.917798 containerd[1488]: time="2025-02-13T15:55:00.917626822Z" level=info msg="CreateContainer within sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce\""
Feb 13 15:55:00.920692 containerd[1488]: time="2025-02-13T15:55:00.918809829Z" level=info msg="StartContainer for \"c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce\""
Feb 13 15:55:01.008073 systemd[1]: run-netns-cni\x2d8ff5c3bc\x2dc6be\x2dd346\x2db227\x2d8ed03f5d31fb.mount: Deactivated successfully.
Feb 13 15:55:01.017991 systemd[1]: Started cri-containerd-c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce.scope - libcontainer container c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce.
Feb 13 15:55:01.044408 containerd[1488]: time="2025-02-13T15:55:01.044346041Z" level=error msg="Failed to destroy network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:01.045683 containerd[1488]: time="2025-02-13T15:55:01.045436949Z" level=error msg="encountered an error cleaning up failed sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:01.052868 containerd[1488]: time="2025-02-13T15:55:01.045541066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:01.052726 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61-shm.mount: Deactivated successfully.
Feb 13 15:55:01.053179 kubelet[1860]: E0213 15:55:01.051978    1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:01.053179 kubelet[1860]: E0213 15:55:01.052057    1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb"
Feb 13 15:55:01.053179 kubelet[1860]: E0213 15:55:01.052092    1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb"
Feb 13 15:55:01.053356 kubelet[1860]: E0213 15:55:01.052167    1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050"
Feb 13 15:55:01.074540 containerd[1488]: time="2025-02-13T15:55:01.073953700Z" level=error msg="Failed to destroy network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:01.078949 containerd[1488]: time="2025-02-13T15:55:01.078882265Z" level=error msg="encountered an error cleaning up failed sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:01.079264 containerd[1488]: time="2025-02-13T15:55:01.079226457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:01.080256 kubelet[1860]: E0213 15:55:01.079700    1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:01.080256 kubelet[1860]: E0213 15:55:01.079769    1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc"
Feb 13 15:55:01.080256 kubelet[1860]: E0213 15:55:01.079838    1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc"
Feb 13 15:55:01.080543 kubelet[1860]: E0213 15:55:01.079936    1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f"
Feb 13 15:55:01.080338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077-shm.mount: Deactivated successfully.
Feb 13 15:55:01.102752 containerd[1488]: time="2025-02-13T15:55:01.102702884Z" level=info msg="StartContainer for \"c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce\" returns successfully"
Feb 13 15:55:01.226608 systemd[1]: cri-containerd-c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce.scope: Deactivated successfully.
Feb 13 15:55:01.262439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce-rootfs.mount: Deactivated successfully.
Feb 13 15:55:01.271527 containerd[1488]: time="2025-02-13T15:55:01.271353806Z" level=info msg="shim disconnected" id=c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce namespace=k8s.io
Feb 13 15:55:01.271527 containerd[1488]: time="2025-02-13T15:55:01.271512056Z" level=warning msg="cleaning up after shim disconnected" id=c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce namespace=k8s.io
Feb 13 15:55:01.271527 containerd[1488]: time="2025-02-13T15:55:01.271528757Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:55:01.536425 kubelet[1860]: I0213 15:55:01.536191    1860 topology_manager.go:215] "Topology Admit Handler" podUID="e0b4efad-f604-4c42-afd6-de92f051a46e" podNamespace="calico-system" podName="calico-typha-667b9cf9dc-wt8t5"
Feb 13 15:55:01.544755 systemd[1]: Created slice kubepods-besteffort-pode0b4efad_f604_4c42_afd6_de92f051a46e.slice - libcontainer container kubepods-besteffort-pode0b4efad_f604_4c42_afd6_de92f051a46e.slice.
Feb 13 15:55:01.561797 kubelet[1860]: E0213 15:55:01.561745    1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:01.625996 kubelet[1860]: I0213 15:55:01.625935    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e0b4efad-f604-4c42-afd6-de92f051a46e-typha-certs\") pod \"calico-typha-667b9cf9dc-wt8t5\" (UID: \"e0b4efad-f604-4c42-afd6-de92f051a46e\") " pod="calico-system/calico-typha-667b9cf9dc-wt8t5"
Feb 13 15:55:01.625996 kubelet[1860]: I0213 15:55:01.626003    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k795g\" (UniqueName: \"kubernetes.io/projected/e0b4efad-f604-4c42-afd6-de92f051a46e-kube-api-access-k795g\") pod \"calico-typha-667b9cf9dc-wt8t5\" (UID: \"e0b4efad-f604-4c42-afd6-de92f051a46e\") " pod="calico-system/calico-typha-667b9cf9dc-wt8t5"
Feb 13 15:55:01.626255 kubelet[1860]: I0213 15:55:01.626055    1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b4efad-f604-4c42-afd6-de92f051a46e-tigera-ca-bundle\") pod \"calico-typha-667b9cf9dc-wt8t5\" (UID: \"e0b4efad-f604-4c42-afd6-de92f051a46e\") " pod="calico-system/calico-typha-667b9cf9dc-wt8t5"
Feb 13 15:55:01.789825 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:55:01.849711 containerd[1488]: time="2025-02-13T15:55:01.849655461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-667b9cf9dc-wt8t5,Uid:e0b4efad-f604-4c42-afd6-de92f051a46e,Namespace:calico-system,Attempt:0,}"
Feb 13 15:55:01.881626 containerd[1488]: time="2025-02-13T15:55:01.881456849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:55:01.883682 containerd[1488]: time="2025-02-13T15:55:01.881542780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:55:01.883682 containerd[1488]: time="2025-02-13T15:55:01.883467793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:55:01.883900 containerd[1488]: time="2025-02-13T15:55:01.883622617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:55:01.892217 kubelet[1860]: I0213 15:55:01.891285    1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077"
Feb 13 15:55:01.892379 containerd[1488]: time="2025-02-13T15:55:01.892268998Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\""
Feb 13 15:55:01.892715 containerd[1488]: time="2025-02-13T15:55:01.892677135Z" level=info msg="Ensure that sandbox 49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077 in task-service has been cleanup successfully"
Feb 13 15:55:01.893011 containerd[1488]: time="2025-02-13T15:55:01.892965915Z" level=info msg="TearDown network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" successfully"
Feb 13 15:55:01.893122 containerd[1488]: time="2025-02-13T15:55:01.893013107Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" returns successfully"
Feb 13 15:55:01.893829 containerd[1488]: time="2025-02-13T15:55:01.893792053Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\""
Feb 13 15:55:01.894012 containerd[1488]: time="2025-02-13T15:55:01.893972761Z" level=info msg="TearDown network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" successfully"
Feb 13 15:55:01.894090 containerd[1488]: time="2025-02-13T15:55:01.894029607Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" returns successfully"
Feb 13 15:55:01.894884 containerd[1488]: time="2025-02-13T15:55:01.894685774Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\""
Feb 13 15:55:01.894884 containerd[1488]: time="2025-02-13T15:55:01.894847483Z" level=info msg="TearDown network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" successfully"
Feb 13 15:55:01.895134 containerd[1488]: time="2025-02-13T15:55:01.895065809Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" returns successfully"
Feb 13 15:55:01.896414 containerd[1488]: time="2025-02-13T15:55:01.895857145Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\""
Feb 13 15:55:01.896414 containerd[1488]: time="2025-02-13T15:55:01.896001079Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully"
Feb 13 15:55:01.896414 containerd[1488]: time="2025-02-13T15:55:01.896016418Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully"
Feb 13 15:55:01.896999 containerd[1488]: time="2025-02-13T15:55:01.896960628Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\""
Feb 13 15:55:01.897130 containerd[1488]: time="2025-02-13T15:55:01.897104985Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully"
Feb 13 15:55:01.897433 containerd[1488]: time="2025-02-13T15:55:01.897131212Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully"
Feb 13 15:55:01.897921 containerd[1488]: time="2025-02-13T15:55:01.897693828Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\""
Feb 13 15:55:01.898376 containerd[1488]: time="2025-02-13T15:55:01.898151911Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully"
Feb 13 15:55:01.898376 containerd[1488]: time="2025-02-13T15:55:01.898178904Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully"
Feb 13 15:55:01.898910 containerd[1488]: time="2025-02-13T15:55:01.898698696Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\""
Feb 13 15:55:01.898910 containerd[1488]: time="2025-02-13T15:55:01.898821348Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully"
Feb 13 15:55:01.898910 containerd[1488]: time="2025-02-13T15:55:01.898839859Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully"
Feb 13 15:55:01.899681 containerd[1488]: time="2025-02-13T15:55:01.899456068Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\""
Feb 13 15:55:01.899681 containerd[1488]: time="2025-02-13T15:55:01.899590834Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully"
Feb 13 15:55:01.899681 containerd[1488]: time="2025-02-13T15:55:01.899609904Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully"
Feb 13 15:55:01.900344 containerd[1488]: time="2025-02-13T15:55:01.900149164Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\""
Feb 13 15:55:01.900344 containerd[1488]: time="2025-02-13T15:55:01.900259659Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully"
Feb 13 15:55:01.900344 containerd[1488]: time="2025-02-13T15:55:01.900277946Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully"
Feb 13 15:55:01.901262 containerd[1488]: time="2025-02-13T15:55:01.900976756Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\""
Feb 13 15:55:01.901262 containerd[1488]: time="2025-02-13T15:55:01.901095055Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully"
Feb 13 15:55:01.901262 containerd[1488]: time="2025-02-13T15:55:01.901113556Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully"
Feb 13 15:55:01.902318 containerd[1488]: time="2025-02-13T15:55:01.902147736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:10,}"
Feb 13 15:55:01.903932 kubelet[1860]: I0213 15:55:01.903904    1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61"
Feb 13 15:55:01.904837 containerd[1488]: time="2025-02-13T15:55:01.904804404Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\""
Feb 13 15:55:01.905093 containerd[1488]: time="2025-02-13T15:55:01.905051860Z" level=info msg="Ensure that sandbox 4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61 in task-service has been cleanup successfully"
Feb 13 15:55:01.905443 containerd[1488]: time="2025-02-13T15:55:01.905258632Z" level=info msg="TearDown network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" successfully"
Feb 13 15:55:01.905443 containerd[1488]: time="2025-02-13T15:55:01.905286024Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" returns successfully"
Feb 13 15:55:01.907943 containerd[1488]: time="2025-02-13T15:55:01.907662515Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\""
Feb 13 15:55:01.907943 containerd[1488]: time="2025-02-13T15:55:01.907782460Z" level=info msg="TearDown network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" successfully"
Feb 13 15:55:01.907943 containerd[1488]: time="2025-02-13T15:55:01.907799940Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" returns successfully"
Feb 13 15:55:01.909383 containerd[1488]: time="2025-02-13T15:55:01.908984607Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\""
Feb 13 15:55:01.909383 containerd[1488]: time="2025-02-13T15:55:01.909098774Z" level=info msg="TearDown network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" successfully"
Feb 13 15:55:01.909383 containerd[1488]: time="2025-02-13T15:55:01.909118963Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" returns successfully"
Feb 13 15:55:01.911840 containerd[1488]: time="2025-02-13T15:55:01.911809290Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\""
Feb 13 15:55:01.912307 containerd[1488]: time="2025-02-13T15:55:01.912111219Z" level=info msg="TearDown network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" successfully"
Feb 13 15:55:01.912307 containerd[1488]: time="2025-02-13T15:55:01.912137649Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns successfully"
Feb 13 15:55:01.914010 containerd[1488]: time="2025-02-13T15:55:01.913814420Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\""
Feb 13 15:55:01.914010 containerd[1488]: time="2025-02-13T15:55:01.913932329Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully"
Feb 13 15:55:01.914010 containerd[1488]: time="2025-02-13T15:55:01.913949048Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully"
Feb 13 15:55:01.915428 kubelet[1860]: I0213 15:55:01.915120    1860 scope.go:117] "RemoveContainer" containerID="df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499"
Feb 13 15:55:01.917286 containerd[1488]: time="2025-02-13T15:55:01.916463307Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\""
Feb 13 15:55:01.917838 containerd[1488]: time="2025-02-13T15:55:01.917443157Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully"
Feb 13 15:55:01.917838 containerd[1488]: time="2025-02-13T15:55:01.917464864Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully"
Feb 13 15:55:01.918507 containerd[1488]: time="2025-02-13T15:55:01.917059957Z" level=info msg="StopPodSandbox for \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\""
Feb 13 15:55:01.918918 containerd[1488]: time="2025-02-13T15:55:01.918887837Z" level=info msg="Container to stop \"c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:55:01.920522 containerd[1488]: time="2025-02-13T15:55:01.920420968Z" level=info msg="Container to stop \"9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:55:01.921110 containerd[1488]: time="2025-02-13T15:55:01.920832510Z" level=info msg="Container to stop \"5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:55:01.921110 containerd[1488]: time="2025-02-13T15:55:01.920865632Z" level=info msg="Container to stop \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:55:01.921555 containerd[1488]: time="2025-02-13T15:55:01.921527766Z" level=info msg="RemoveContainer for \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\""
Feb 13 15:55:01.923371 containerd[1488]: time="2025-02-13T15:55:01.922795522Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\""
Feb 13 15:55:01.923604 containerd[1488]: time="2025-02-13T15:55:01.923374262Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully"
Feb 13 15:55:01.923604 containerd[1488]: time="2025-02-13T15:55:01.923393020Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully"
Feb 13 15:55:01.924420 containerd[1488]: time="2025-02-13T15:55:01.924243983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:7,}"
Feb 13 15:55:01.924792 systemd[1]: Started cri-containerd-f59421f399b8bd16f02a4106275f234fe7dc735fb1d284ad2b66e13a79c52dbe.scope - libcontainer container
f59421f399b8bd16f02a4106275f234fe7dc735fb1d284ad2b66e13a79c52dbe. Feb 13 15:55:01.931818 containerd[1488]: time="2025-02-13T15:55:01.931651913Z" level=info msg="RemoveContainer for \"df8bbf38bc3d7e9a09f49f6db24191a2e2b03428449ef5a59c19b50b732d8499\" returns successfully" Feb 13 15:55:01.939831 systemd[1]: cri-containerd-f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11.scope: Deactivated successfully. Feb 13 15:55:02.009322 systemd[1]: run-netns-cni\x2deaffd4cd\x2dc121\x2de71c\x2dad1f\x2d4c8c51bfb5ae.mount: Deactivated successfully. Feb 13 15:55:02.010185 systemd[1]: run-netns-cni\x2d894ad1af\x2d61c7\x2de77e\x2d613f\x2dbe4a7da36112.mount: Deactivated successfully. Feb 13 15:55:02.010915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11-shm.mount: Deactivated successfully. Feb 13 15:55:02.066012 containerd[1488]: time="2025-02-13T15:55:02.065809183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-667b9cf9dc-wt8t5,Uid:e0b4efad-f604-4c42-afd6-de92f051a46e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f59421f399b8bd16f02a4106275f234fe7dc735fb1d284ad2b66e13a79c52dbe\"" Feb 13 15:55:02.074547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11-rootfs.mount: Deactivated successfully. 
Feb 13 15:55:02.078691 containerd[1488]: time="2025-02-13T15:55:02.074976892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:55:02.080404 containerd[1488]: time="2025-02-13T15:55:02.080148211Z" level=info msg="shim disconnected" id=f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11 namespace=k8s.io Feb 13 15:55:02.080404 containerd[1488]: time="2025-02-13T15:55:02.080207642Z" level=warning msg="cleaning up after shim disconnected" id=f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11 namespace=k8s.io Feb 13 15:55:02.080404 containerd[1488]: time="2025-02-13T15:55:02.080222472Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:55:02.112602 containerd[1488]: time="2025-02-13T15:55:02.111885584Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:55:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:55:02.114275 containerd[1488]: time="2025-02-13T15:55:02.114233478Z" level=info msg="TearDown network for sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" successfully" Feb 13 15:55:02.114458 containerd[1488]: time="2025-02-13T15:55:02.114434663Z" level=info msg="StopPodSandbox for \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" returns successfully" Feb 13 15:55:02.159474 containerd[1488]: time="2025-02-13T15:55:02.159379644Z" level=error msg="Failed to destroy network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:02.161945 containerd[1488]: time="2025-02-13T15:55:02.161874607Z" level=error msg="encountered an error cleaning up failed sandbox 
\"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:02.163762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86-shm.mount: Deactivated successfully. Feb 13 15:55:02.165629 containerd[1488]: time="2025-02-13T15:55:02.164718815Z" level=error msg="Failed to destroy network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:02.167737 containerd[1488]: time="2025-02-13T15:55:02.166036308Z" level=error msg="encountered an error cleaning up failed sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:02.167737 containerd[1488]: time="2025-02-13T15:55:02.166119930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:02.167974 kubelet[1860]: E0213 15:55:02.167764 1860 remote_runtime.go:193] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:02.167974 kubelet[1860]: E0213 15:55:02.167841 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:55:02.167974 kubelet[1860]: E0213 15:55:02.167877 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:55:02.168180 kubelet[1860]: E0213 15:55:02.167974 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050" Feb 13 15:55:02.169698 containerd[1488]: time="2025-02-13T15:55:02.168350074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:02.169815 kubelet[1860]: E0213 15:55:02.168633 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:02.169815 kubelet[1860]: E0213 15:55:02.169542 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:55:02.169815 kubelet[1860]: E0213 15:55:02.169649 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:55:02.170136 kubelet[1860]: E0213 15:55:02.170100 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:55:02.170388 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199-shm.mount: Deactivated successfully. 
Feb 13 15:55:02.231685 kubelet[1860]: I0213 15:55:02.230733 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-bin-dir\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.231685 kubelet[1860]: I0213 15:55:02.230812 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26a28cea-7c71-4536-9836-b29824c3f605-tigera-ca-bundle\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.231685 kubelet[1860]: I0213 15:55:02.230824 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:55:02.231685 kubelet[1860]: I0213 15:55:02.230851 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/26a28cea-7c71-4536-9836-b29824c3f605-node-certs\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.231685 kubelet[1860]: I0213 15:55:02.230898 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-flexvol-driver-host\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.231685 kubelet[1860]: I0213 15:55:02.230928 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-xtables-lock\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.232152 kubelet[1860]: I0213 15:55:02.230964 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85b99\" (UniqueName: \"kubernetes.io/projected/26a28cea-7c71-4536-9836-b29824c3f605-kube-api-access-85b99\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.232152 kubelet[1860]: I0213 15:55:02.230994 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-var-lib-calico\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.232152 kubelet[1860]: I0213 15:55:02.231024 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-var-run-calico\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.232152 kubelet[1860]: I0213 15:55:02.231058 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-policysync\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.232152 kubelet[1860]: I0213 15:55:02.231086 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-lib-modules\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.232152 kubelet[1860]: I0213 15:55:02.231116 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-log-dir\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.232461 kubelet[1860]: I0213 15:55:02.231151 1860 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-net-dir\") pod \"26a28cea-7c71-4536-9836-b29824c3f605\" (UID: \"26a28cea-7c71-4536-9836-b29824c3f605\") " Feb 13 15:55:02.232461 kubelet[1860]: I0213 15:55:02.231197 1860 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-bin-dir\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.232461 kubelet[1860]: I0213 15:55:02.231242 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:55:02.233914 kubelet[1860]: I0213 15:55:02.233870 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:55:02.234049 kubelet[1860]: I0213 15:55:02.233972 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:55:02.234049 kubelet[1860]: I0213 15:55:02.234040 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:55:02.234470 kubelet[1860]: I0213 15:55:02.234436 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). 
InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:55:02.234764 kubelet[1860]: I0213 15:55:02.234619 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-policysync" (OuterVolumeSpecName: "policysync") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:55:02.234886 kubelet[1860]: I0213 15:55:02.234651 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:55:02.234994 kubelet[1860]: I0213 15:55:02.234677 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:55:02.238640 kubelet[1860]: I0213 15:55:02.237757 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a28cea-7c71-4536-9836-b29824c3f605-node-certs" (OuterVolumeSpecName: "node-certs") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:55:02.241367 systemd[1]: var-lib-kubelet-pods-26a28cea\x2d7c71\x2d4536\x2d9836\x2db29824c3f605-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Feb 13 15:55:02.242356 kubelet[1860]: I0213 15:55:02.242088 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a28cea-7c71-4536-9836-b29824c3f605-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:55:02.244403 kubelet[1860]: I0213 15:55:02.244340 1860 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a28cea-7c71-4536-9836-b29824c3f605-kube-api-access-85b99" (OuterVolumeSpecName: "kube-api-access-85b99") pod "26a28cea-7c71-4536-9836-b29824c3f605" (UID: "26a28cea-7c71-4536-9836-b29824c3f605"). InnerVolumeSpecName "kube-api-access-85b99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:55:02.272218 kubelet[1860]: I0213 15:55:02.272170 1860 topology_manager.go:215] "Topology Admit Handler" podUID="0248324a-7c57-450b-8ae7-4cc174095da2" podNamespace="calico-system" podName="calico-node-dqqxz" Feb 13 15:55:02.272448 kubelet[1860]: E0213 15:55:02.272245 1860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26a28cea-7c71-4536-9836-b29824c3f605" containerName="calico-node" Feb 13 15:55:02.272448 kubelet[1860]: E0213 15:55:02.272263 1860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26a28cea-7c71-4536-9836-b29824c3f605" containerName="calico-node" Feb 13 15:55:02.272448 kubelet[1860]: E0213 15:55:02.272274 1860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26a28cea-7c71-4536-9836-b29824c3f605" containerName="flexvol-driver" Feb 13 15:55:02.272448 kubelet[1860]: E0213 15:55:02.272285 1860 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26a28cea-7c71-4536-9836-b29824c3f605" containerName="install-cni" Feb 13 15:55:02.272448 kubelet[1860]: I0213 15:55:02.272315 1860 
memory_manager.go:354] "RemoveStaleState removing state" podUID="26a28cea-7c71-4536-9836-b29824c3f605" containerName="calico-node" Feb 13 15:55:02.272448 kubelet[1860]: I0213 15:55:02.272327 1860 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a28cea-7c71-4536-9836-b29824c3f605" containerName="calico-node" Feb 13 15:55:02.280172 systemd[1]: Created slice kubepods-besteffort-pod0248324a_7c57_450b_8ae7_4cc174095da2.slice - libcontainer container kubepods-besteffort-pod0248324a_7c57_450b_8ae7_4cc174095da2.slice. Feb 13 15:55:02.332868 kubelet[1860]: I0213 15:55:02.331925 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0248324a-7c57-450b-8ae7-4cc174095da2-cni-bin-dir\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.332868 kubelet[1860]: I0213 15:55:02.332003 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0248324a-7c57-450b-8ae7-4cc174095da2-lib-modules\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.332868 kubelet[1860]: I0213 15:55:02.332071 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0248324a-7c57-450b-8ae7-4cc174095da2-var-lib-calico\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.332868 kubelet[1860]: I0213 15:55:02.332110 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0248324a-7c57-450b-8ae7-4cc174095da2-flexvol-driver-host\") pod \"calico-node-dqqxz\" (UID: 
\"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.332868 kubelet[1860]: I0213 15:55:02.332143 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0248324a-7c57-450b-8ae7-4cc174095da2-var-run-calico\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.333285 kubelet[1860]: I0213 15:55:02.332177 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0248324a-7c57-450b-8ae7-4cc174095da2-tigera-ca-bundle\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.333285 kubelet[1860]: I0213 15:55:02.332213 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0248324a-7c57-450b-8ae7-4cc174095da2-node-certs\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.333285 kubelet[1860]: I0213 15:55:02.332247 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0248324a-7c57-450b-8ae7-4cc174095da2-xtables-lock\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.333285 kubelet[1860]: I0213 15:55:02.332281 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0248324a-7c57-450b-8ae7-4cc174095da2-cni-log-dir\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" 
Feb 13 15:55:02.333285 kubelet[1860]: I0213 15:55:02.332316 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9l6\" (UniqueName: \"kubernetes.io/projected/0248324a-7c57-450b-8ae7-4cc174095da2-kube-api-access-6t9l6\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.333551 kubelet[1860]: I0213 15:55:02.332349 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0248324a-7c57-450b-8ae7-4cc174095da2-policysync\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.333551 kubelet[1860]: I0213 15:55:02.332386 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0248324a-7c57-450b-8ae7-4cc174095da2-cni-net-dir\") pod \"calico-node-dqqxz\" (UID: \"0248324a-7c57-450b-8ae7-4cc174095da2\") " pod="calico-system/calico-node-dqqxz" Feb 13 15:55:02.333551 kubelet[1860]: I0213 15:55:02.332419 1860 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-log-dir\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333551 kubelet[1860]: I0213 15:55:02.332441 1860 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-cni-net-dir\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333551 kubelet[1860]: I0213 15:55:02.332457 1860 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/26a28cea-7c71-4536-9836-b29824c3f605-node-certs\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333551 kubelet[1860]: 
I0213 15:55:02.332476 1860 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-flexvol-driver-host\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333551 kubelet[1860]: I0213 15:55:02.332495 1860 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26a28cea-7c71-4536-9836-b29824c3f605-tigera-ca-bundle\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333950 kubelet[1860]: I0213 15:55:02.332515 1860 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-xtables-lock\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333950 kubelet[1860]: I0213 15:55:02.332536 1860 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-85b99\" (UniqueName: \"kubernetes.io/projected/26a28cea-7c71-4536-9836-b29824c3f605-kube-api-access-85b99\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333950 kubelet[1860]: I0213 15:55:02.332555 1860 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-var-lib-calico\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333950 kubelet[1860]: I0213 15:55:02.332599 1860 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-var-run-calico\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333950 kubelet[1860]: I0213 15:55:02.332619 1860 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-policysync\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.333950 kubelet[1860]: I0213 15:55:02.332636 1860 reconciler_common.go:300] "Volume detached 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26a28cea-7c71-4536-9836-b29824c3f605-lib-modules\") on node \"10.128.0.29\" DevicePath \"\"" Feb 13 15:55:02.562588 kubelet[1860]: E0213 15:55:02.562493 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:02.585417 containerd[1488]: time="2025-02-13T15:55:02.584825692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dqqxz,Uid:0248324a-7c57-450b-8ae7-4cc174095da2,Namespace:calico-system,Attempt:0,}" Feb 13 15:55:02.619737 containerd[1488]: time="2025-02-13T15:55:02.619511736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:55:02.619737 containerd[1488]: time="2025-02-13T15:55:02.619605654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:55:02.619737 containerd[1488]: time="2025-02-13T15:55:02.619625759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:02.620941 containerd[1488]: time="2025-02-13T15:55:02.620678391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:02.652907 systemd[1]: Started cri-containerd-eaaa4b513895b89cf7198db29d3e7742e4e8cdf5bbd09bb2f651f74ed42dc3a9.scope - libcontainer container eaaa4b513895b89cf7198db29d3e7742e4e8cdf5bbd09bb2f651f74ed42dc3a9. 
Feb 13 15:55:02.686018 containerd[1488]: time="2025-02-13T15:55:02.685956890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dqqxz,Uid:0248324a-7c57-450b-8ae7-4cc174095da2,Namespace:calico-system,Attempt:0,} returns sandbox id \"eaaa4b513895b89cf7198db29d3e7742e4e8cdf5bbd09bb2f651f74ed42dc3a9\"" Feb 13 15:55:02.689688 containerd[1488]: time="2025-02-13T15:55:02.689635391Z" level=info msg="CreateContainer within sandbox \"eaaa4b513895b89cf7198db29d3e7742e4e8cdf5bbd09bb2f651f74ed42dc3a9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:55:02.705553 containerd[1488]: time="2025-02-13T15:55:02.705472148Z" level=info msg="CreateContainer within sandbox \"eaaa4b513895b89cf7198db29d3e7742e4e8cdf5bbd09bb2f651f74ed42dc3a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eda475e72d8c1398d8f3be01c9520ff7b4c52b85596c91f55bb4a7fba251417a\"" Feb 13 15:55:02.706745 containerd[1488]: time="2025-02-13T15:55:02.706696891Z" level=info msg="StartContainer for \"eda475e72d8c1398d8f3be01c9520ff7b4c52b85596c91f55bb4a7fba251417a\"" Feb 13 15:55:02.743795 systemd[1]: Started cri-containerd-eda475e72d8c1398d8f3be01c9520ff7b4c52b85596c91f55bb4a7fba251417a.scope - libcontainer container eda475e72d8c1398d8f3be01c9520ff7b4c52b85596c91f55bb4a7fba251417a. Feb 13 15:55:02.783914 containerd[1488]: time="2025-02-13T15:55:02.783830708Z" level=info msg="StartContainer for \"eda475e72d8c1398d8f3be01c9520ff7b4c52b85596c91f55bb4a7fba251417a\" returns successfully" Feb 13 15:55:02.801553 systemd[1]: cri-containerd-eda475e72d8c1398d8f3be01c9520ff7b4c52b85596c91f55bb4a7fba251417a.scope: Deactivated successfully. 
Feb 13 15:55:02.839545 containerd[1488]: time="2025-02-13T15:55:02.839260867Z" level=info msg="shim disconnected" id=eda475e72d8c1398d8f3be01c9520ff7b4c52b85596c91f55bb4a7fba251417a namespace=k8s.io Feb 13 15:55:02.839545 containerd[1488]: time="2025-02-13T15:55:02.839335756Z" level=warning msg="cleaning up after shim disconnected" id=eda475e72d8c1398d8f3be01c9520ff7b4c52b85596c91f55bb4a7fba251417a namespace=k8s.io Feb 13 15:55:02.839545 containerd[1488]: time="2025-02-13T15:55:02.839350263Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:55:02.920847 kubelet[1860]: I0213 15:55:02.920812 1860 scope.go:117] "RemoveContainer" containerID="c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce" Feb 13 15:55:02.924822 containerd[1488]: time="2025-02-13T15:55:02.924394759Z" level=info msg="RemoveContainer for \"c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce\"" Feb 13 15:55:02.930842 containerd[1488]: time="2025-02-13T15:55:02.930798786Z" level=info msg="RemoveContainer for \"c2724e55bc7147f32da532eefd7b35aeba57538dfa0106171aa428c4b0e3fdce\" returns successfully" Feb 13 15:55:02.931740 kubelet[1860]: I0213 15:55:02.930978 1860 scope.go:117] "RemoveContainer" containerID="5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107" Feb 13 15:55:02.931100 systemd[1]: Removed slice kubepods-besteffort-pod26a28cea_7c71_4536_9836_b29824c3f605.slice - libcontainer container kubepods-besteffort-pod26a28cea_7c71_4536_9836_b29824c3f605.slice. 
Feb 13 15:55:02.934287 kubelet[1860]: I0213 15:55:02.933423 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86" Feb 13 15:55:02.934414 containerd[1488]: time="2025-02-13T15:55:02.934134818Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\"" Feb 13 15:55:02.934589 containerd[1488]: time="2025-02-13T15:55:02.934517333Z" level=info msg="Ensure that sandbox 23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86 in task-service has been cleanup successfully" Feb 13 15:55:02.934833 containerd[1488]: time="2025-02-13T15:55:02.934795563Z" level=info msg="TearDown network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" successfully" Feb 13 15:55:02.934833 containerd[1488]: time="2025-02-13T15:55:02.934822748Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" returns successfully" Feb 13 15:55:02.935500 containerd[1488]: time="2025-02-13T15:55:02.935165472Z" level=info msg="RemoveContainer for \"5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107\"" Feb 13 15:55:02.936252 containerd[1488]: time="2025-02-13T15:55:02.936170160Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\"" Feb 13 15:55:02.936354 containerd[1488]: time="2025-02-13T15:55:02.936286095Z" level=info msg="TearDown network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" successfully" Feb 13 15:55:02.936354 containerd[1488]: time="2025-02-13T15:55:02.936305346Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" returns successfully" Feb 13 15:55:02.936968 containerd[1488]: time="2025-02-13T15:55:02.936895186Z" level=info msg="StopPodSandbox for 
\"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\"" Feb 13 15:55:02.937151 containerd[1488]: time="2025-02-13T15:55:02.937013478Z" level=info msg="TearDown network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" successfully" Feb 13 15:55:02.937151 containerd[1488]: time="2025-02-13T15:55:02.937032783Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" returns successfully" Feb 13 15:55:02.938089 containerd[1488]: time="2025-02-13T15:55:02.937970865Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\"" Feb 13 15:55:02.938089 containerd[1488]: time="2025-02-13T15:55:02.938085590Z" level=info msg="TearDown network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" successfully" Feb 13 15:55:02.938284 containerd[1488]: time="2025-02-13T15:55:02.938102830Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" returns successfully" Feb 13 15:55:02.939749 containerd[1488]: time="2025-02-13T15:55:02.938712033Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\"" Feb 13 15:55:02.939749 containerd[1488]: time="2025-02-13T15:55:02.938821158Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully" Feb 13 15:55:02.939749 containerd[1488]: time="2025-02-13T15:55:02.938841339Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully" Feb 13 15:55:02.939749 containerd[1488]: time="2025-02-13T15:55:02.939626198Z" level=info msg="RemoveContainer for \"5cd1625f74740fbf355e93d4d43c5debf2acb755e6c664a6c3d8e190d206b107\" returns successfully" Feb 13 15:55:02.940025 kubelet[1860]: I0213 15:55:02.939866 1860 scope.go:117] "RemoveContainer" 
containerID="9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091" Feb 13 15:55:02.940494 containerd[1488]: time="2025-02-13T15:55:02.940385347Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\"" Feb 13 15:55:02.941249 containerd[1488]: time="2025-02-13T15:55:02.941051163Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully" Feb 13 15:55:02.941249 containerd[1488]: time="2025-02-13T15:55:02.941100847Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully" Feb 13 15:55:02.941685 containerd[1488]: time="2025-02-13T15:55:02.941643561Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" Feb 13 15:55:02.941802 containerd[1488]: time="2025-02-13T15:55:02.941766023Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully" Feb 13 15:55:02.941802 containerd[1488]: time="2025-02-13T15:55:02.941784133Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully" Feb 13 15:55:02.942505 containerd[1488]: time="2025-02-13T15:55:02.942315514Z" level=info msg="RemoveContainer for \"9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091\"" Feb 13 15:55:02.943019 containerd[1488]: time="2025-02-13T15:55:02.942317154Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:55:02.943132 containerd[1488]: time="2025-02-13T15:55:02.943102901Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully" Feb 13 15:55:02.943132 containerd[1488]: time="2025-02-13T15:55:02.943121430Z" level=info msg="StopPodSandbox for 
\"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully" Feb 13 15:55:02.943675 kubelet[1860]: I0213 15:55:02.943365 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199" Feb 13 15:55:02.944006 containerd[1488]: time="2025-02-13T15:55:02.943898607Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:55:02.944224 containerd[1488]: time="2025-02-13T15:55:02.944126899Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:55:02.944224 containerd[1488]: time="2025-02-13T15:55:02.944149239Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:55:02.945189 containerd[1488]: time="2025-02-13T15:55:02.945154463Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:55:02.945427 containerd[1488]: time="2025-02-13T15:55:02.945276813Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:55:02.945427 containerd[1488]: time="2025-02-13T15:55:02.945296367Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:55:02.945427 containerd[1488]: time="2025-02-13T15:55:02.945376644Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\"" Feb 13 15:55:02.945660 containerd[1488]: time="2025-02-13T15:55:02.945623930Z" level=info msg="Ensure that sandbox 2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199 in task-service has been cleanup successfully" Feb 13 15:55:02.946608 containerd[1488]: time="2025-02-13T15:55:02.946511969Z" level=info 
msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:55:02.946881 containerd[1488]: time="2025-02-13T15:55:02.946853707Z" level=info msg="TearDown network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" successfully" Feb 13 15:55:02.947305 containerd[1488]: time="2025-02-13T15:55:02.947278689Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" returns successfully" Feb 13 15:55:02.947477 containerd[1488]: time="2025-02-13T15:55:02.947061109Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:55:02.947736 containerd[1488]: time="2025-02-13T15:55:02.947712855Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:55:02.947899 containerd[1488]: time="2025-02-13T15:55:02.947210247Z" level=info msg="RemoveContainer for \"9cfe1562ca66d0939394df6a8c719a63a6c6472874c1cdb9898c2ada56dfd091\" returns successfully" Feb 13 15:55:02.949331 containerd[1488]: time="2025-02-13T15:55:02.949295307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:11,}" Feb 13 15:55:02.950051 containerd[1488]: time="2025-02-13T15:55:02.949897861Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\"" Feb 13 15:55:02.950051 containerd[1488]: time="2025-02-13T15:55:02.950016500Z" level=info msg="TearDown network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" successfully" Feb 13 15:55:02.950051 containerd[1488]: time="2025-02-13T15:55:02.950034891Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" returns successfully" Feb 13 15:55:02.950944 containerd[1488]: 
time="2025-02-13T15:55:02.950719773Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\"" Feb 13 15:55:02.950944 containerd[1488]: time="2025-02-13T15:55:02.950828098Z" level=info msg="TearDown network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" successfully" Feb 13 15:55:02.950944 containerd[1488]: time="2025-02-13T15:55:02.950846461Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" returns successfully" Feb 13 15:55:02.951541 containerd[1488]: time="2025-02-13T15:55:02.951504097Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\"" Feb 13 15:55:02.951689 containerd[1488]: time="2025-02-13T15:55:02.951660324Z" level=info msg="TearDown network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" successfully" Feb 13 15:55:02.951689 containerd[1488]: time="2025-02-13T15:55:02.951681277Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" returns successfully" Feb 13 15:55:02.953345 containerd[1488]: time="2025-02-13T15:55:02.953317524Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\"" Feb 13 15:55:02.953880 containerd[1488]: time="2025-02-13T15:55:02.953731076Z" level=info msg="CreateContainer within sandbox \"eaaa4b513895b89cf7198db29d3e7742e4e8cdf5bbd09bb2f651f74ed42dc3a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:55:02.954117 containerd[1488]: time="2025-02-13T15:55:02.953744718Z" level=info msg="TearDown network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" successfully" Feb 13 15:55:02.954709 containerd[1488]: time="2025-02-13T15:55:02.954235937Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns 
successfully" Feb 13 15:55:02.955246 containerd[1488]: time="2025-02-13T15:55:02.955211591Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\"" Feb 13 15:55:02.955350 containerd[1488]: time="2025-02-13T15:55:02.955325293Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully" Feb 13 15:55:02.955350 containerd[1488]: time="2025-02-13T15:55:02.955343843Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully" Feb 13 15:55:02.957482 containerd[1488]: time="2025-02-13T15:55:02.955767296Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" Feb 13 15:55:02.957482 containerd[1488]: time="2025-02-13T15:55:02.955928043Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully" Feb 13 15:55:02.957482 containerd[1488]: time="2025-02-13T15:55:02.955947770Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully" Feb 13 15:55:02.957482 containerd[1488]: time="2025-02-13T15:55:02.956309513Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:55:02.957482 containerd[1488]: time="2025-02-13T15:55:02.956465406Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully" Feb 13 15:55:02.957482 containerd[1488]: time="2025-02-13T15:55:02.956485274Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully" Feb 13 15:55:02.957482 containerd[1488]: time="2025-02-13T15:55:02.957028354Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:8,}" Feb 13 15:55:02.983421 containerd[1488]: time="2025-02-13T15:55:02.982952922Z" level=info msg="CreateContainer within sandbox \"eaaa4b513895b89cf7198db29d3e7742e4e8cdf5bbd09bb2f651f74ed42dc3a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a\"" Feb 13 15:55:02.991596 containerd[1488]: time="2025-02-13T15:55:02.986067780Z" level=info msg="StartContainer for \"0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a\"" Feb 13 15:55:03.007462 systemd[1]: run-netns-cni\x2d91dcd49c\x2da5b8\x2d63aa\x2d637b\x2d63eb65be46ee.mount: Deactivated successfully. Feb 13 15:55:03.007638 systemd[1]: run-netns-cni\x2d61ec1c2d\x2dd10a\x2d4af4\x2db4d5\x2dd6344d064758.mount: Deactivated successfully. Feb 13 15:55:03.007747 systemd[1]: var-lib-kubelet-pods-26a28cea\x2d7c71\x2d4536\x2d9836\x2db29824c3f605-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Feb 13 15:55:03.007866 systemd[1]: var-lib-kubelet-pods-26a28cea\x2d7c71\x2d4536\x2d9836\x2db29824c3f605-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d85b99.mount: Deactivated successfully. Feb 13 15:55:03.102816 systemd[1]: Started cri-containerd-0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a.scope - libcontainer container 0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a. 
Feb 13 15:55:03.164698 containerd[1488]: time="2025-02-13T15:55:03.163810593Z" level=error msg="Failed to destroy network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:03.168639 containerd[1488]: time="2025-02-13T15:55:03.165299144Z" level=error msg="encountered an error cleaning up failed sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:03.168639 containerd[1488]: time="2025-02-13T15:55:03.165670561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:11,} failed, error" error="failed to setup network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:03.168473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead-shm.mount: Deactivated successfully. 
Feb 13 15:55:03.168998 kubelet[1860]: E0213 15:55:03.166879 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:03.168998 kubelet[1860]: E0213 15:55:03.166971 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:55:03.168998 kubelet[1860]: E0213 15:55:03.167015 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:55:03.169192 kubelet[1860]: E0213 15:55:03.167092 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:55:03.178198 containerd[1488]: time="2025-02-13T15:55:03.175182198Z" level=error msg="Failed to destroy network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:03.178198 containerd[1488]: time="2025-02-13T15:55:03.177939784Z" level=error msg="encountered an error cleaning up failed sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:03.178198 containerd[1488]: time="2025-02-13T15:55:03.178039449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:8,} failed, error" error="failed to setup network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:03.178961 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f-shm.mount: Deactivated successfully. 
Feb 13 15:55:03.180813 kubelet[1860]: E0213 15:55:03.179644 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:03.180813 kubelet[1860]: E0213 15:55:03.180167 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:55:03.182737 kubelet[1860]: E0213 15:55:03.181884 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:55:03.185678 kubelet[1860]: E0213 15:55:03.185641 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050" Feb 13 15:55:03.208528 containerd[1488]: time="2025-02-13T15:55:03.208467145Z" level=info msg="StartContainer for \"0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a\" returns successfully" Feb 13 15:55:03.563064 kubelet[1860]: E0213 15:55:03.562904 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:03.701593 kubelet[1860]: I0213 15:55:03.699467 1860 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="26a28cea-7c71-4536-9836-b29824c3f605" path="/var/lib/kubelet/pods/26a28cea-7c71-4536-9836-b29824c3f605/volumes" Feb 13 15:55:03.975054 kubelet[1860]: I0213 15:55:03.973140 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead" Feb 13 15:55:03.976453 containerd[1488]: time="2025-02-13T15:55:03.976380277Z" level=info msg="StopPodSandbox for \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\"" Feb 13 15:55:03.978611 containerd[1488]: time="2025-02-13T15:55:03.977284189Z" level=info msg="Ensure that sandbox 0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead in task-service has been cleanup successfully" Feb 13 15:55:03.978611 containerd[1488]: time="2025-02-13T15:55:03.977994022Z" level=info msg="TearDown network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" successfully" Feb 13 15:55:03.978611 containerd[1488]: time="2025-02-13T15:55:03.978039284Z" level=info msg="StopPodSandbox for \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" returns successfully" Feb 13 15:55:03.979291 containerd[1488]: 
time="2025-02-13T15:55:03.979091414Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\"" Feb 13 15:55:03.979291 containerd[1488]: time="2025-02-13T15:55:03.979201006Z" level=info msg="TearDown network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" successfully" Feb 13 15:55:03.979291 containerd[1488]: time="2025-02-13T15:55:03.979216785Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" returns successfully" Feb 13 15:55:03.981001 containerd[1488]: time="2025-02-13T15:55:03.980965911Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\"" Feb 13 15:55:03.982587 containerd[1488]: time="2025-02-13T15:55:03.982349071Z" level=info msg="TearDown network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" successfully" Feb 13 15:55:03.983658 containerd[1488]: time="2025-02-13T15:55:03.983629061Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" returns successfully" Feb 13 15:55:03.986158 containerd[1488]: time="2025-02-13T15:55:03.985920871Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\"" Feb 13 15:55:03.990480 containerd[1488]: time="2025-02-13T15:55:03.989870861Z" level=info msg="TearDown network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" successfully" Feb 13 15:55:03.990480 containerd[1488]: time="2025-02-13T15:55:03.989909689Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" returns successfully" Feb 13 15:55:03.991662 systemd[1]: run-netns-cni\x2de2a0c226\x2dd617\x2d4723\x2d7ef8\x2d2c38326f3c36.mount: Deactivated successfully. 
Feb 13 15:55:03.996105 containerd[1488]: time="2025-02-13T15:55:03.994486881Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\""
Feb 13 15:55:03.996105 containerd[1488]: time="2025-02-13T15:55:03.994651921Z" level=info msg="TearDown network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" successfully"
Feb 13 15:55:03.996105 containerd[1488]: time="2025-02-13T15:55:03.994677934Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" returns successfully"
Feb 13 15:55:03.997369 containerd[1488]: time="2025-02-13T15:55:03.996818760Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\""
Feb 13 15:55:03.997369 containerd[1488]: time="2025-02-13T15:55:03.996937565Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully"
Feb 13 15:55:03.997369 containerd[1488]: time="2025-02-13T15:55:03.996952634Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully"
Feb 13 15:55:03.998612 containerd[1488]: time="2025-02-13T15:55:03.998233084Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\""
Feb 13 15:55:03.998612 containerd[1488]: time="2025-02-13T15:55:03.998351610Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully"
Feb 13 15:55:03.998612 containerd[1488]: time="2025-02-13T15:55:03.998367942Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully"
Feb 13 15:55:04.000191 containerd[1488]: time="2025-02-13T15:55:03.999817443Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\""
Feb 13 15:55:04.000191 containerd[1488]: time="2025-02-13T15:55:03.999959375Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully"
Feb 13 15:55:04.000191 containerd[1488]: time="2025-02-13T15:55:03.999981108Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully"
Feb 13 15:55:04.001399 kubelet[1860]: I0213 15:55:04.001004 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f"
Feb 13 15:55:04.003596 containerd[1488]: time="2025-02-13T15:55:04.002825075Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\""
Feb 13 15:55:04.003596 containerd[1488]: time="2025-02-13T15:55:04.003512334Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully"
Feb 13 15:55:04.003596 containerd[1488]: time="2025-02-13T15:55:04.003534099Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully"
Feb 13 15:55:04.005223 containerd[1488]: time="2025-02-13T15:55:04.004706684Z" level=info msg="StopPodSandbox for \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\""
Feb 13 15:55:04.005223 containerd[1488]: time="2025-02-13T15:55:04.004993107Z" level=info msg="Ensure that sandbox e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f in task-service has been cleanup successfully"
Feb 13 15:55:04.005487 containerd[1488]: time="2025-02-13T15:55:04.005441383Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\""
Feb 13 15:55:04.008869 containerd[1488]: time="2025-02-13T15:55:04.007645164Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully"
Feb 13 15:55:04.010056 systemd[1]: run-netns-cni\x2d34ad6089\x2df329\x2d6309\x2daa06\x2db3d728300d0f.mount: Deactivated successfully.
Feb 13 15:55:04.010362 containerd[1488]: time="2025-02-13T15:55:04.010220661Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully"
Feb 13 15:55:04.010800 containerd[1488]: time="2025-02-13T15:55:04.010614964Z" level=info msg="TearDown network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" successfully"
Feb 13 15:55:04.010800 containerd[1488]: time="2025-02-13T15:55:04.010644836Z" level=info msg="StopPodSandbox for \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" returns successfully"
Feb 13 15:55:04.013596 containerd[1488]: time="2025-02-13T15:55:04.012934898Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\""
Feb 13 15:55:04.013596 containerd[1488]: time="2025-02-13T15:55:04.013225925Z" level=info msg="TearDown network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" successfully"
Feb 13 15:55:04.013596 containerd[1488]: time="2025-02-13T15:55:04.013248998Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" returns successfully"
Feb 13 15:55:04.014224 containerd[1488]: time="2025-02-13T15:55:04.013973763Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\""
Feb 13 15:55:04.014224 containerd[1488]: time="2025-02-13T15:55:04.014107757Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully"
Feb 13 15:55:04.014224 containerd[1488]: time="2025-02-13T15:55:04.014125044Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully"
Feb 13 15:55:04.016646 containerd[1488]: time="2025-02-13T15:55:04.016289620Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\""
Feb 13 15:55:04.016646 containerd[1488]: time="2025-02-13T15:55:04.016407897Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully"
Feb 13 15:55:04.016646 containerd[1488]: time="2025-02-13T15:55:04.016424119Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully"
Feb 13 15:55:04.017766 containerd[1488]: time="2025-02-13T15:55:04.017553550Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\""
Feb 13 15:55:04.018238 containerd[1488]: time="2025-02-13T15:55:04.018113103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:12,}"
Feb 13 15:55:04.019323 containerd[1488]: time="2025-02-13T15:55:04.019199605Z" level=info msg="TearDown network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" successfully"
Feb 13 15:55:04.019840 containerd[1488]: time="2025-02-13T15:55:04.019683974Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" returns successfully"
Feb 13 15:55:04.022884 containerd[1488]: time="2025-02-13T15:55:04.022554965Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\""
Feb 13 15:55:04.022884 containerd[1488]: time="2025-02-13T15:55:04.022789954Z" level=info msg="TearDown network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" successfully"
Feb 13 15:55:04.022884 containerd[1488]: time="2025-02-13T15:55:04.022806741Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" returns successfully"
Feb 13 15:55:04.027167 containerd[1488]: time="2025-02-13T15:55:04.026915595Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\""
Feb 13 15:55:04.027167 containerd[1488]: time="2025-02-13T15:55:04.027051213Z" level=info msg="TearDown network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" successfully"
Feb 13 15:55:04.027167 containerd[1488]: time="2025-02-13T15:55:04.027075682Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" returns successfully"
Feb 13 15:55:04.028433 containerd[1488]: time="2025-02-13T15:55:04.028210937Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\""
Feb 13 15:55:04.028433 containerd[1488]: time="2025-02-13T15:55:04.028331049Z" level=info msg="TearDown network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" successfully"
Feb 13 15:55:04.028433 containerd[1488]: time="2025-02-13T15:55:04.028350357Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns successfully"
Feb 13 15:55:04.030244 containerd[1488]: time="2025-02-13T15:55:04.029893260Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\""
Feb 13 15:55:04.030244 containerd[1488]: time="2025-02-13T15:55:04.030171269Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully"
Feb 13 15:55:04.030244 containerd[1488]: time="2025-02-13T15:55:04.030192601Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully"
Feb 13 15:55:04.032827 containerd[1488]: time="2025-02-13T15:55:04.032429143Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\""
Feb 13 15:55:04.032827 containerd[1488]: time="2025-02-13T15:55:04.032547842Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully"
Feb 13 15:55:04.032827 containerd[1488]: time="2025-02-13T15:55:04.032595445Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully"
Feb 13 15:55:04.034209 containerd[1488]: time="2025-02-13T15:55:04.034176150Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\""
Feb 13 15:55:04.036590 containerd[1488]: time="2025-02-13T15:55:04.036540393Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully"
Feb 13 15:55:04.036590 containerd[1488]: time="2025-02-13T15:55:04.036583604Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully"
Feb 13 15:55:04.038064 containerd[1488]: time="2025-02-13T15:55:04.037887299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:9,}"
Feb 13 15:55:04.308293 systemd[1]: cri-containerd-0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a.scope: Deactivated successfully.
Feb 13 15:55:04.310058 systemd[1]: cri-containerd-0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a.scope: Consumed 1.009s CPU time.
Feb 13 15:55:04.327276 containerd[1488]: time="2025-02-13T15:55:04.327084119Z" level=error msg="Failed to destroy network for sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:04.329985 containerd[1488]: time="2025-02-13T15:55:04.328859341Z" level=error msg="encountered an error cleaning up failed sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:04.329985 containerd[1488]: time="2025-02-13T15:55:04.328952721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:9,} failed, error" error="failed to setup network for sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:04.330248 kubelet[1860]: E0213 15:55:04.329474 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:04.330248 kubelet[1860]: E0213 15:55:04.329576 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb"
Feb 13 15:55:04.330248 kubelet[1860]: E0213 15:55:04.329615 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb"
Feb 13 15:55:04.330506 kubelet[1860]: E0213 15:55:04.329695 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050"
Feb 13 15:55:04.363528 containerd[1488]: time="2025-02-13T15:55:04.363051666Z" level=error msg="Failed to destroy network for sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:04.363894 containerd[1488]: time="2025-02-13T15:55:04.363798644Z" level=error msg="encountered an error cleaning up failed sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:04.365597 containerd[1488]: time="2025-02-13T15:55:04.364799752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:12,} failed, error" error="failed to setup network for sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:04.365708 kubelet[1860]: E0213 15:55:04.365101 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:55:04.365708 kubelet[1860]: E0213 15:55:04.365177 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc"
Feb 13 15:55:04.365708 kubelet[1860]: E0213 15:55:04.365216 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc"
Feb 13 15:55:04.365897 kubelet[1860]: E0213 15:55:04.365295 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f"
Feb 13 15:55:04.461950 containerd[1488]: time="2025-02-13T15:55:04.461076518Z" level=info msg="shim disconnected" id=0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a namespace=k8s.io
Feb 13 15:55:04.461950 containerd[1488]: time="2025-02-13T15:55:04.461226573Z" level=warning msg="cleaning up after shim disconnected" id=0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a namespace=k8s.io
Feb 13 15:55:04.461950 containerd[1488]: time="2025-02-13T15:55:04.461241167Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:55:04.564394 kubelet[1860]: E0213 15:55:04.563474 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
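Every sandbox failure in the entries above traces to the same stat error: the Calico CNI plugin cannot read /var/lib/calico/nodename (the path is quoted verbatim from the error message). A minimal sketch of the check one might run on the affected node; the NODENAME_FILE override is purely for illustration and not part of Calico:

```shell
#!/bin/sh
# Check for the node state file the Calico CNI plugin expects.
# calico/node writes this file once it starts and mounts /var/lib/calico/.
NODENAME_FILE="${NODENAME_FILE:-/var/lib/calico/nodename}"

if [ -f "$NODENAME_FILE" ]; then
  echo "calico nodename: $(cat "$NODENAME_FILE")"
else
  echo "missing $NODENAME_FILE: calico/node is likely not running on this host"
fi
```

Until that file appears, containerd keeps retrying RunPodSandbox (note the Attempt counter climbing from 9 to 10 and 12 to 13 in these entries).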
Feb 13 15:55:04.750451 containerd[1488]: time="2025-02-13T15:55:04.750377101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:55:04.751608 containerd[1488]: time="2025-02-13T15:55:04.751480784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Feb 13 15:55:04.752876 containerd[1488]: time="2025-02-13T15:55:04.752806054Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:55:04.755653 containerd[1488]: time="2025-02-13T15:55:04.755585223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:55:04.757713 containerd[1488]: time="2025-02-13T15:55:04.756400826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.681377612s"
Feb 13 15:55:04.757713 containerd[1488]: time="2025-02-13T15:55:04.756444688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Feb 13 15:55:04.768580 containerd[1488]: time="2025-02-13T15:55:04.768509552Z" level=info msg="CreateContainer within sandbox \"f59421f399b8bd16f02a4106275f234fe7dc735fb1d284ad2b66e13a79c52dbe\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 15:55:04.783949 containerd[1488]: time="2025-02-13T15:55:04.783859107Z" level=info msg="CreateContainer within sandbox \"f59421f399b8bd16f02a4106275f234fe7dc735fb1d284ad2b66e13a79c52dbe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2c6935d733a5f586b5c4cf0a7e70d6a7304515a1fb5c66e57a881eef8db4f7ea\""
Feb 13 15:55:04.784793 containerd[1488]: time="2025-02-13T15:55:04.784710253Z" level=info msg="StartContainer for \"2c6935d733a5f586b5c4cf0a7e70d6a7304515a1fb5c66e57a881eef8db4f7ea\""
Feb 13 15:55:04.819826 systemd[1]: Started cri-containerd-2c6935d733a5f586b5c4cf0a7e70d6a7304515a1fb5c66e57a881eef8db4f7ea.scope - libcontainer container 2c6935d733a5f586b5c4cf0a7e70d6a7304515a1fb5c66e57a881eef8db4f7ea.
Feb 13 15:55:04.877003 containerd[1488]: time="2025-02-13T15:55:04.876942692Z" level=info msg="StartContainer for \"2c6935d733a5f586b5c4cf0a7e70d6a7304515a1fb5c66e57a881eef8db4f7ea\" returns successfully"
Feb 13 15:55:04.994145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8-shm.mount: Deactivated successfully.
Feb 13 15:55:04.994826 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50-shm.mount: Deactivated successfully.
Feb 13 15:55:04.994943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0df0b6f5f450416dafd42878aedf42ce92c1060397853db9eab992e8d5bf662a-rootfs.mount: Deactivated successfully.
Feb 13 15:55:05.034147 kubelet[1860]: I0213 15:55:05.034091 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50"
Feb 13 15:55:05.035384 containerd[1488]: time="2025-02-13T15:55:05.035062407Z" level=info msg="StopPodSandbox for \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\""
Feb 13 15:55:05.036083 containerd[1488]: time="2025-02-13T15:55:05.035745836Z" level=info msg="Ensure that sandbox a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50 in task-service has been cleanup successfully"
Feb 13 15:55:05.036083 containerd[1488]: time="2025-02-13T15:55:05.036047188Z" level=info msg="TearDown network for sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\" successfully"
Feb 13 15:55:05.036388 containerd[1488]: time="2025-02-13T15:55:05.036265822Z" level=info msg="StopPodSandbox for \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\" returns successfully"
Feb 13 15:55:05.038586 containerd[1488]: time="2025-02-13T15:55:05.037861758Z" level=info msg="StopPodSandbox for \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\""
Feb 13 15:55:05.040870 containerd[1488]: time="2025-02-13T15:55:05.039072764Z" level=info msg="TearDown network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" successfully"
Feb 13 15:55:05.040870 containerd[1488]: time="2025-02-13T15:55:05.039098562Z" level=info msg="StopPodSandbox for \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" returns successfully"
Feb 13 15:55:05.040480 systemd[1]: run-netns-cni\x2d865968fc\x2d039b\x2d663a\x2d5fe0\x2d5c2cdbcae30b.mount: Deactivated successfully.
Feb 13 15:55:05.041768 containerd[1488]: time="2025-02-13T15:55:05.041612914Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\""
Feb 13 15:55:05.041768 containerd[1488]: time="2025-02-13T15:55:05.041746829Z" level=info msg="TearDown network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" successfully"
Feb 13 15:55:05.041768 containerd[1488]: time="2025-02-13T15:55:05.041767100Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" returns successfully"
Feb 13 15:55:05.042455 containerd[1488]: time="2025-02-13T15:55:05.042407873Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\""
Feb 13 15:55:05.042552 containerd[1488]: time="2025-02-13T15:55:05.042519112Z" level=info msg="TearDown network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" successfully"
Feb 13 15:55:05.042552 containerd[1488]: time="2025-02-13T15:55:05.042537200Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" returns successfully"
Feb 13 15:55:05.044220 containerd[1488]: time="2025-02-13T15:55:05.044097813Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\""
Feb 13 15:55:05.044324 containerd[1488]: time="2025-02-13T15:55:05.044256056Z" level=info msg="TearDown network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" successfully"
Feb 13 15:55:05.044324 containerd[1488]: time="2025-02-13T15:55:05.044278849Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" returns successfully"
Feb 13 15:55:05.045003 containerd[1488]: time="2025-02-13T15:55:05.044958230Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\""
Feb 13 15:55:05.045220 containerd[1488]: time="2025-02-13T15:55:05.045193760Z" level=info msg="TearDown network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" successfully"
Feb 13 15:55:05.045447 containerd[1488]: time="2025-02-13T15:55:05.045218547Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" returns successfully"
Feb 13 15:55:05.045545 containerd[1488]: time="2025-02-13T15:55:05.045510500Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\""
Feb 13 15:55:05.046268 containerd[1488]: time="2025-02-13T15:55:05.045643369Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully"
Feb 13 15:55:05.046268 containerd[1488]: time="2025-02-13T15:55:05.045662172Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully"
Feb 13 15:55:05.046268 containerd[1488]: time="2025-02-13T15:55:05.045995241Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\""
Feb 13 15:55:05.046268 containerd[1488]: time="2025-02-13T15:55:05.046129019Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully"
Feb 13 15:55:05.046268 containerd[1488]: time="2025-02-13T15:55:05.046147612Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully"
Feb 13 15:55:05.046552 kubelet[1860]: I0213 15:55:05.046000 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8"
Feb 13 15:55:05.047416 containerd[1488]: time="2025-02-13T15:55:05.047186890Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\""
Feb 13 15:55:05.047416 containerd[1488]: time="2025-02-13T15:55:05.047302802Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully"
Feb 13 15:55:05.047416 containerd[1488]: time="2025-02-13T15:55:05.047323165Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully"
Feb 13 15:55:05.047416 containerd[1488]: time="2025-02-13T15:55:05.047336225Z" level=info msg="StopPodSandbox for \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\""
Feb 13 15:55:05.047701 containerd[1488]: time="2025-02-13T15:55:05.047613946Z" level=info msg="Ensure that sandbox d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8 in task-service has been cleanup successfully"
Feb 13 15:55:05.051939 containerd[1488]: time="2025-02-13T15:55:05.050032040Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\""
Feb 13 15:55:05.051939 containerd[1488]: time="2025-02-13T15:55:05.050145028Z" level=info msg="TearDown network for sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\" successfully"
Feb 13 15:55:05.051939 containerd[1488]: time="2025-02-13T15:55:05.050167948Z" level=info msg="StopPodSandbox for \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\" returns successfully"
Feb 13 15:55:05.051939 containerd[1488]: time="2025-02-13T15:55:05.050145089Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully"
Feb 13 15:55:05.051939 containerd[1488]: time="2025-02-13T15:55:05.050313776Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully"
Feb 13 15:55:05.052927 systemd[1]: run-netns-cni\x2d444aa516\x2d448d\x2dfd0e\x2df4f8\x2d73350552e7e2.mount: Deactivated successfully.
Feb 13 15:55:05.053762 containerd[1488]: time="2025-02-13T15:55:05.053032676Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\""
Feb 13 15:55:05.053762 containerd[1488]: time="2025-02-13T15:55:05.053139828Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully"
Feb 13 15:55:05.053762 containerd[1488]: time="2025-02-13T15:55:05.053156502Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully"
Feb 13 15:55:05.053762 containerd[1488]: time="2025-02-13T15:55:05.053254206Z" level=info msg="StopPodSandbox for \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\""
Feb 13 15:55:05.053762 containerd[1488]: time="2025-02-13T15:55:05.053346826Z" level=info msg="TearDown network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" successfully"
Feb 13 15:55:05.053762 containerd[1488]: time="2025-02-13T15:55:05.053363120Z" level=info msg="StopPodSandbox for \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" returns successfully"
Feb 13 15:55:05.054973 containerd[1488]: time="2025-02-13T15:55:05.054416141Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\""
Feb 13 15:55:05.054973 containerd[1488]: time="2025-02-13T15:55:05.054540893Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully"
Feb 13 15:55:05.054973 containerd[1488]: time="2025-02-13T15:55:05.054601275Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully"
Feb 13 15:55:05.054973 containerd[1488]: time="2025-02-13T15:55:05.054787185Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\""
Feb 13 15:55:05.054973 containerd[1488]: time="2025-02-13T15:55:05.054888904Z" level=info msg="TearDown network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" successfully"
Feb 13 15:55:05.054973 containerd[1488]: time="2025-02-13T15:55:05.054903679Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" returns successfully"
Feb 13 15:55:05.056658 containerd[1488]: time="2025-02-13T15:55:05.056502596Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\""
Feb 13 15:55:05.056882 containerd[1488]: time="2025-02-13T15:55:05.056747066Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully"
Feb 13 15:55:05.056882 containerd[1488]: time="2025-02-13T15:55:05.056784580Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully"
Feb 13 15:55:05.056882 containerd[1488]: time="2025-02-13T15:55:05.056504294Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\""
Feb 13 15:55:05.057061 containerd[1488]: time="2025-02-13T15:55:05.056970576Z" level=info msg="TearDown network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" successfully"
Feb 13 15:55:05.057061 containerd[1488]: time="2025-02-13T15:55:05.056988752Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" returns successfully"
Feb 13 15:55:05.058828 containerd[1488]: time="2025-02-13T15:55:05.058126356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:13,}"
Feb 13 15:55:05.058828 containerd[1488]: time="2025-02-13T15:55:05.058307651Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\""
Feb 13 15:55:05.058828 containerd[1488]: time="2025-02-13T15:55:05.058409481Z" level=info msg="TearDown network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" successfully"
Feb 13 15:55:05.058828 containerd[1488]: time="2025-02-13T15:55:05.058425150Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" returns successfully"
Feb 13 15:55:05.060115 containerd[1488]: time="2025-02-13T15:55:05.060078266Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\""
Feb 13 15:55:05.060273 containerd[1488]: time="2025-02-13T15:55:05.060198881Z" level=info msg="TearDown network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" successfully"
Feb 13 15:55:05.060273 containerd[1488]: time="2025-02-13T15:55:05.060218300Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" returns successfully"
Feb 13 15:55:05.063088 containerd[1488]: time="2025-02-13T15:55:05.062877392Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\""
Feb 13 15:55:05.063088 containerd[1488]: time="2025-02-13T15:55:05.062991279Z" level=info msg="TearDown network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" successfully"
Feb 13 15:55:05.063088 containerd[1488]: time="2025-02-13T15:55:05.063010153Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns successfully"
Feb 13 15:55:05.068713 containerd[1488]: time="2025-02-13T15:55:05.068173949Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\""
Feb 13 15:55:05.068713 containerd[1488]: time="2025-02-13T15:55:05.068317611Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully"
Feb 13 15:55:05.068713 containerd[1488]: time="2025-02-13T15:55:05.068335338Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully"
Feb 13 15:55:05.074958 containerd[1488]: time="2025-02-13T15:55:05.074845036Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\""
Feb 13 15:55:05.075725 containerd[1488]: time="2025-02-13T15:55:05.075482050Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully"
Feb 13 15:55:05.075725 containerd[1488]: time="2025-02-13T15:55:05.075510717Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully"
Feb 13 15:55:05.077777 containerd[1488]: time="2025-02-13T15:55:05.077529497Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\""
Feb 13 15:55:05.077777 containerd[1488]: time="2025-02-13T15:55:05.077683101Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully"
Feb 13 15:55:05.077777 containerd[1488]: time="2025-02-13T15:55:05.077702945Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully"
Feb 13 15:55:05.080209 containerd[1488]: time="2025-02-13T15:55:05.079498061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:10,}"
Feb 13 15:55:05.093554 containerd[1488]: time="2025-02-13T15:55:05.093308513Z" level=info msg="CreateContainer within sandbox \"eaaa4b513895b89cf7198db29d3e7742e4e8cdf5bbd09bb2f651f74ed42dc3a9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13
15:55:05.129428 containerd[1488]: time="2025-02-13T15:55:05.129279523Z" level=info msg="CreateContainer within sandbox \"eaaa4b513895b89cf7198db29d3e7742e4e8cdf5bbd09bb2f651f74ed42dc3a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6553e5181190f761141ee3496fb5f1c760513658b0cb7a77b3e85046d734b3ca\"" Feb 13 15:55:05.132050 containerd[1488]: time="2025-02-13T15:55:05.131498719Z" level=info msg="StartContainer for \"6553e5181190f761141ee3496fb5f1c760513658b0cb7a77b3e85046d734b3ca\"" Feb 13 15:55:05.215960 systemd[1]: Started cri-containerd-6553e5181190f761141ee3496fb5f1c760513658b0cb7a77b3e85046d734b3ca.scope - libcontainer container 6553e5181190f761141ee3496fb5f1c760513658b0cb7a77b3e85046d734b3ca. Feb 13 15:55:05.230330 containerd[1488]: time="2025-02-13T15:55:05.230015645Z" level=error msg="Failed to destroy network for sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:05.232991 containerd[1488]: time="2025-02-13T15:55:05.232769609Z" level=error msg="encountered an error cleaning up failed sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:05.233756 containerd[1488]: time="2025-02-13T15:55:05.233505135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:13,} failed, error" error="failed to setup network for sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:05.234047 kubelet[1860]: E0213 15:55:05.233977 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:05.234320 kubelet[1860]: E0213 15:55:05.234125 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:55:05.234320 kubelet[1860]: E0213 15:55:05.234163 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hkphc" Feb 13 15:55:05.234320 kubelet[1860]: E0213 15:55:05.234258 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hkphc_calico-system(fbf57cb1-3114-4738-90d1-01b2062bf75f)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hkphc" podUID="fbf57cb1-3114-4738-90d1-01b2062bf75f" Feb 13 15:55:05.281287 containerd[1488]: time="2025-02-13T15:55:05.281164973Z" level=error msg="Failed to destroy network for sandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:05.282484 containerd[1488]: time="2025-02-13T15:55:05.281672213Z" level=error msg="encountered an error cleaning up failed sandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:05.282484 containerd[1488]: time="2025-02-13T15:55:05.281761651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:10,} failed, error" error="failed to setup network for sandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:05.283418 kubelet[1860]: E0213 15:55:05.282096 1860 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:55:05.283418 kubelet[1860]: E0213 15:55:05.282165 1860 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:55:05.283418 kubelet[1860]: E0213 15:55:05.282201 1860 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jfhxb" Feb 13 15:55:05.283645 kubelet[1860]: E0213 15:55:05.282283 1860 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jfhxb_default(8836a9fd-bf45-4f06-b3a3-1533b6113050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-6d5f899847-jfhxb" podUID="8836a9fd-bf45-4f06-b3a3-1533b6113050" Feb 13 15:55:05.287085 containerd[1488]: time="2025-02-13T15:55:05.287047492Z" level=info msg="StartContainer for \"6553e5181190f761141ee3496fb5f1c760513658b0cb7a77b3e85046d734b3ca\" returns successfully" Feb 13 15:55:05.565472 kubelet[1860]: E0213 15:55:05.565425 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:05.848024 kubelet[1860]: I0213 15:55:05.847877 1860 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-667b9cf9dc-wt8t5" podStartSLOduration=2.162858106 podStartE2EDuration="4.847820079s" podCreationTimestamp="2025-02-13 15:55:01 +0000 UTC" firstStartedPulling="2025-02-13 15:55:02.072944858 +0000 UTC m=+23.245737124" lastFinishedPulling="2025-02-13 15:55:04.757906839 +0000 UTC m=+25.930699097" observedRunningTime="2025-02-13 15:55:05.102885289 +0000 UTC m=+26.275677563" watchObservedRunningTime="2025-02-13 15:55:05.847820079 +0000 UTC m=+27.020612350" Feb 13 15:55:05.849077 kubelet[1860]: I0213 15:55:05.848176 1860 topology_manager.go:215] "Topology Admit Handler" podUID="91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f" podNamespace="calico-system" podName="calico-kube-controllers-65f6b8654f-jsv2p" Feb 13 15:55:05.855821 systemd[1]: Created slice kubepods-besteffort-pod91fbe7a0_482b_4d8d_9051_4ed2b2d7f48f.slice - libcontainer container kubepods-besteffort-pod91fbe7a0_482b_4d8d_9051_4ed2b2d7f48f.slice. 
Feb 13 15:55:05.860430 kubelet[1860]: I0213 15:55:05.860403 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2f58\" (UniqueName: \"kubernetes.io/projected/91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f-kube-api-access-b2f58\") pod \"calico-kube-controllers-65f6b8654f-jsv2p\" (UID: \"91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f\") " pod="calico-system/calico-kube-controllers-65f6b8654f-jsv2p" Feb 13 15:55:05.860674 kubelet[1860]: I0213 15:55:05.860639 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f-tigera-ca-bundle\") pod \"calico-kube-controllers-65f6b8654f-jsv2p\" (UID: \"91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f\") " pod="calico-system/calico-kube-controllers-65f6b8654f-jsv2p" Feb 13 15:55:06.077634 kubelet[1860]: I0213 15:55:06.077596 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8" Feb 13 15:55:06.078586 containerd[1488]: time="2025-02-13T15:55:06.078417236Z" level=info msg="StopPodSandbox for \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\"" Feb 13 15:55:06.079176 containerd[1488]: time="2025-02-13T15:55:06.078863216Z" level=info msg="Ensure that sandbox fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8 in task-service has been cleanup successfully" Feb 13 15:55:06.086240 containerd[1488]: time="2025-02-13T15:55:06.082690612Z" level=info msg="TearDown network for sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\" successfully" Feb 13 15:55:06.086240 containerd[1488]: time="2025-02-13T15:55:06.082725444Z" level=info msg="StopPodSandbox for \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\" returns successfully" Feb 13 15:55:06.086240 containerd[1488]: 
time="2025-02-13T15:55:06.083990553Z" level=info msg="StopPodSandbox for \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\"" Feb 13 15:55:06.086240 containerd[1488]: time="2025-02-13T15:55:06.084101071Z" level=info msg="TearDown network for sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\" successfully" Feb 13 15:55:06.086240 containerd[1488]: time="2025-02-13T15:55:06.084119396Z" level=info msg="StopPodSandbox for \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\" returns successfully" Feb 13 15:55:06.086240 containerd[1488]: time="2025-02-13T15:55:06.084824723Z" level=info msg="StopPodSandbox for \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\"" Feb 13 15:55:06.086240 containerd[1488]: time="2025-02-13T15:55:06.084938463Z" level=info msg="TearDown network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" successfully" Feb 13 15:55:06.086240 containerd[1488]: time="2025-02-13T15:55:06.085006037Z" level=info msg="StopPodSandbox for \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" returns successfully" Feb 13 15:55:06.085305 systemd[1]: run-netns-cni\x2d6107fc54\x2d2762\x2d5c95\x2d1d25\x2db4404ab00fd6.mount: Deactivated successfully. 
Feb 13 15:55:06.089089 containerd[1488]: time="2025-02-13T15:55:06.086384223Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\"" Feb 13 15:55:06.089089 containerd[1488]: time="2025-02-13T15:55:06.086508164Z" level=info msg="TearDown network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" successfully" Feb 13 15:55:06.089089 containerd[1488]: time="2025-02-13T15:55:06.086526116Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" returns successfully" Feb 13 15:55:06.089089 containerd[1488]: time="2025-02-13T15:55:06.087048229Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\"" Feb 13 15:55:06.089089 containerd[1488]: time="2025-02-13T15:55:06.087156187Z" level=info msg="TearDown network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" successfully" Feb 13 15:55:06.089089 containerd[1488]: time="2025-02-13T15:55:06.087174360Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" returns successfully" Feb 13 15:55:06.089089 containerd[1488]: time="2025-02-13T15:55:06.088414848Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\"" Feb 13 15:55:06.089089 containerd[1488]: time="2025-02-13T15:55:06.088516612Z" level=info msg="TearDown network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" successfully" Feb 13 15:55:06.089089 containerd[1488]: time="2025-02-13T15:55:06.088533267Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" returns successfully" Feb 13 15:55:06.089544 containerd[1488]: time="2025-02-13T15:55:06.089095386Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\"" Feb 13 15:55:06.089544 
containerd[1488]: time="2025-02-13T15:55:06.089195455Z" level=info msg="TearDown network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" successfully" Feb 13 15:55:06.089544 containerd[1488]: time="2025-02-13T15:55:06.089212846Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" returns successfully" Feb 13 15:55:06.089966 containerd[1488]: time="2025-02-13T15:55:06.089821761Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\"" Feb 13 15:55:06.090232 containerd[1488]: time="2025-02-13T15:55:06.090161961Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully" Feb 13 15:55:06.090232 containerd[1488]: time="2025-02-13T15:55:06.090186707Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully" Feb 13 15:55:06.092268 containerd[1488]: time="2025-02-13T15:55:06.090625034Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\"" Feb 13 15:55:06.092268 containerd[1488]: time="2025-02-13T15:55:06.090794193Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully" Feb 13 15:55:06.092268 containerd[1488]: time="2025-02-13T15:55:06.090814441Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully" Feb 13 15:55:06.092268 containerd[1488]: time="2025-02-13T15:55:06.092143011Z" level=info msg="StopPodSandbox for \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\"" Feb 13 15:55:06.092471 kubelet[1860]: I0213 15:55:06.090436 1860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64" Feb 13 
15:55:06.092544 containerd[1488]: time="2025-02-13T15:55:06.092461021Z" level=info msg="Ensure that sandbox f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64 in task-service has been cleanup successfully" Feb 13 15:55:06.092800 containerd[1488]: time="2025-02-13T15:55:06.092740233Z" level=info msg="TearDown network for sandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\" successfully" Feb 13 15:55:06.092891 containerd[1488]: time="2025-02-13T15:55:06.092815875Z" level=info msg="StopPodSandbox for \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\" returns successfully" Feb 13 15:55:06.092891 containerd[1488]: time="2025-02-13T15:55:06.092742442Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" Feb 13 15:55:06.092997 containerd[1488]: time="2025-02-13T15:55:06.092962663Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully" Feb 13 15:55:06.092997 containerd[1488]: time="2025-02-13T15:55:06.092980591Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully" Feb 13 15:55:06.095858 containerd[1488]: time="2025-02-13T15:55:06.095825176Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:55:06.095960 containerd[1488]: time="2025-02-13T15:55:06.095937964Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully" Feb 13 15:55:06.096015 containerd[1488]: time="2025-02-13T15:55:06.095961981Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully" Feb 13 15:55:06.097355 systemd[1]: run-netns-cni\x2d8aca1b45\x2d0a0c\x2dccb4\x2d4447\x2d190ad812ef60.mount: Deactivated successfully. 
Feb 13 15:55:06.097993 containerd[1488]: time="2025-02-13T15:55:06.097846623Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:55:06.097993 containerd[1488]: time="2025-02-13T15:55:06.097954662Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:55:06.097993 containerd[1488]: time="2025-02-13T15:55:06.097971863Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:55:06.098250 containerd[1488]: time="2025-02-13T15:55:06.098063224Z" level=info msg="StopPodSandbox for \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\"" Feb 13 15:55:06.098250 containerd[1488]: time="2025-02-13T15:55:06.098157667Z" level=info msg="TearDown network for sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\" successfully" Feb 13 15:55:06.098250 containerd[1488]: time="2025-02-13T15:55:06.098174401Z" level=info msg="StopPodSandbox for \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\" returns successfully" Feb 13 15:55:06.100704 containerd[1488]: time="2025-02-13T15:55:06.098896457Z" level=info msg="StopPodSandbox for \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\"" Feb 13 15:55:06.100704 containerd[1488]: time="2025-02-13T15:55:06.099006042Z" level=info msg="TearDown network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" successfully" Feb 13 15:55:06.100704 containerd[1488]: time="2025-02-13T15:55:06.099021831Z" level=info msg="StopPodSandbox for \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" returns successfully" Feb 13 15:55:06.101549 containerd[1488]: time="2025-02-13T15:55:06.099475757Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:55:06.101549 
containerd[1488]: time="2025-02-13T15:55:06.101208595Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\"" Feb 13 15:55:06.101549 containerd[1488]: time="2025-02-13T15:55:06.101464193Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:55:06.101549 containerd[1488]: time="2025-02-13T15:55:06.101481977Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:55:06.101874 containerd[1488]: time="2025-02-13T15:55:06.101550323Z" level=info msg="TearDown network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" successfully" Feb 13 15:55:06.101874 containerd[1488]: time="2025-02-13T15:55:06.101601407Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" returns successfully" Feb 13 15:55:06.104585 containerd[1488]: time="2025-02-13T15:55:06.102686844Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\"" Feb 13 15:55:06.104585 containerd[1488]: time="2025-02-13T15:55:06.102795943Z" level=info msg="TearDown network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" successfully" Feb 13 15:55:06.104585 containerd[1488]: time="2025-02-13T15:55:06.102813672Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" returns successfully" Feb 13 15:55:06.104958 containerd[1488]: time="2025-02-13T15:55:06.104930225Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\"" Feb 13 15:55:06.105164 containerd[1488]: time="2025-02-13T15:55:06.105141382Z" level=info msg="TearDown network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" successfully" Feb 13 15:55:06.105290 
containerd[1488]: time="2025-02-13T15:55:06.105271119Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" returns successfully" Feb 13 15:55:06.105484 containerd[1488]: time="2025-02-13T15:55:06.104953806Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:55:06.105709 containerd[1488]: time="2025-02-13T15:55:06.105686012Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:55:06.105818 containerd[1488]: time="2025-02-13T15:55:06.105799708Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:55:06.106769 containerd[1488]: time="2025-02-13T15:55:06.106737518Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\"" Feb 13 15:55:06.107409 containerd[1488]: time="2025-02-13T15:55:06.106849924Z" level=info msg="TearDown network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" successfully" Feb 13 15:55:06.107409 containerd[1488]: time="2025-02-13T15:55:06.106871029Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" returns successfully" Feb 13 15:55:06.107409 containerd[1488]: time="2025-02-13T15:55:06.106964221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:14,}" Feb 13 15:55:06.107409 containerd[1488]: time="2025-02-13T15:55:06.107383913Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\"" Feb 13 15:55:06.107854 containerd[1488]: time="2025-02-13T15:55:06.107722382Z" level=info msg="TearDown network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" 
successfully" Feb 13 15:55:06.107945 containerd[1488]: time="2025-02-13T15:55:06.107853998Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns successfully" Feb 13 15:55:06.110155 containerd[1488]: time="2025-02-13T15:55:06.110116369Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\"" Feb 13 15:55:06.110259 containerd[1488]: time="2025-02-13T15:55:06.110237432Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully" Feb 13 15:55:06.110320 containerd[1488]: time="2025-02-13T15:55:06.110256367Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully" Feb 13 15:55:06.111076 containerd[1488]: time="2025-02-13T15:55:06.110847974Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" Feb 13 15:55:06.111076 containerd[1488]: time="2025-02-13T15:55:06.110958292Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully" Feb 13 15:55:06.111076 containerd[1488]: time="2025-02-13T15:55:06.110974970Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully" Feb 13 15:55:06.112998 kubelet[1860]: I0213 15:55:06.112750 1860 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-dqqxz" podStartSLOduration=4.112698555 podStartE2EDuration="4.112698555s" podCreationTimestamp="2025-02-13 15:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:55:06.109384093 +0000 UTC m=+27.282176366" watchObservedRunningTime="2025-02-13 15:55:06.112698555 +0000 UTC m=+27.285490830" Feb 13 15:55:06.113200 
containerd[1488]: time="2025-02-13T15:55:06.112778189Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:55:06.113200 containerd[1488]: time="2025-02-13T15:55:06.112888527Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully" Feb 13 15:55:06.113200 containerd[1488]: time="2025-02-13T15:55:06.112904094Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully" Feb 13 15:55:06.114119 containerd[1488]: time="2025-02-13T15:55:06.114089211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:11,}" Feb 13 15:55:06.159705 containerd[1488]: time="2025-02-13T15:55:06.159653948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f6b8654f-jsv2p,Uid:91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f,Namespace:calico-system,Attempt:0,}" Feb 13 15:55:06.366226 systemd-networkd[1398]: cali3af8a639af3: Link UP Feb 13 15:55:06.366490 systemd-networkd[1398]: cali3af8a639af3: Gained carrier Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.181 [INFO][3623] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.215 [INFO][3623] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0 nginx-deployment-6d5f899847- default 8836a9fd-bf45-4f06-b3a3-1533b6113050 1084 0 2025-02-13 15:54:53 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.128.0.29 nginx-deployment-6d5f899847-jfhxb eth0 default [] [] [kns.default ksa.default.default] 
cali3af8a639af3 [] []}} ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Namespace="default" Pod="nginx-deployment-6d5f899847-jfhxb" WorkloadEndpoint="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.215 [INFO][3623] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Namespace="default" Pod="nginx-deployment-6d5f899847-jfhxb" WorkloadEndpoint="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.268 [INFO][3658] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" HandleID="k8s-pod-network.977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Workload="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.308 [INFO][3658] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" HandleID="k8s-pod-network.977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Workload="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319570), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.29", "pod":"nginx-deployment-6d5f899847-jfhxb", "timestamp":"2025-02-13 15:55:06.268181498 +0000 UTC"}, Hostname:"10.128.0.29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.309 [INFO][3658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.309 [INFO][3658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.309 [INFO][3658] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.29' Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.324 [INFO][3658] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" host="10.128.0.29" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.329 [INFO][3658] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.29" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.334 [INFO][3658] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.336 [INFO][3658] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.339 [INFO][3658] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.339 [INFO][3658] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" host="10.128.0.29" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.341 [INFO][3658] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.346 [INFO][3658] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" host="10.128.0.29" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.353 
[INFO][3658] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.129/26] block=192.168.125.128/26 handle="k8s-pod-network.977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" host="10.128.0.29" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.353 [INFO][3658] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.129/26] handle="k8s-pod-network.977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" host="10.128.0.29" Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.353 [INFO][3658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:55:06.385065 containerd[1488]: 2025-02-13 15:55:06.353 [INFO][3658] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.129/26] IPv6=[] ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" HandleID="k8s-pod-network.977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Workload="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" Feb 13 15:55:06.386246 containerd[1488]: 2025-02-13 15:55:06.355 [INFO][3623] cni-plugin/k8s.go 386: Populated endpoint ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Namespace="default" Pod="nginx-deployment-6d5f899847-jfhxb" WorkloadEndpoint="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"8836a9fd-bf45-4f06-b3a3-1533b6113050", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 54, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.29", ContainerID:"", Pod:"nginx-deployment-6d5f899847-jfhxb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3af8a639af3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:06.386246 containerd[1488]: 2025-02-13 15:55:06.355 [INFO][3623] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.129/32] ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Namespace="default" Pod="nginx-deployment-6d5f899847-jfhxb" WorkloadEndpoint="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" Feb 13 15:55:06.386246 containerd[1488]: 2025-02-13 15:55:06.355 [INFO][3623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3af8a639af3 ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Namespace="default" Pod="nginx-deployment-6d5f899847-jfhxb" WorkloadEndpoint="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" Feb 13 15:55:06.386246 containerd[1488]: 2025-02-13 15:55:06.365 [INFO][3623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Namespace="default" Pod="nginx-deployment-6d5f899847-jfhxb" WorkloadEndpoint="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" Feb 13 15:55:06.386246 containerd[1488]: 2025-02-13 15:55:06.367 [INFO][3623] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Namespace="default" Pod="nginx-deployment-6d5f899847-jfhxb" WorkloadEndpoint="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"8836a9fd-bf45-4f06-b3a3-1533b6113050", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 54, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.29", ContainerID:"977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c", Pod:"nginx-deployment-6d5f899847-jfhxb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3af8a639af3", MAC:"7a:f7:37:9e:b4:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:06.386246 containerd[1488]: 2025-02-13 15:55:06.383 [INFO][3623] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c" Namespace="default" Pod="nginx-deployment-6d5f899847-jfhxb" WorkloadEndpoint="10.128.0.29-k8s-nginx--deployment--6d5f899847--jfhxb-eth0" Feb 13 15:55:06.424389 
containerd[1488]: time="2025-02-13T15:55:06.422310700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:55:06.424389 containerd[1488]: time="2025-02-13T15:55:06.422408880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:55:06.424389 containerd[1488]: time="2025-02-13T15:55:06.422492093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:06.424389 containerd[1488]: time="2025-02-13T15:55:06.422913035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:06.428374 systemd-networkd[1398]: cali368ffe1c1ff: Link UP Feb 13 15:55:06.428787 systemd-networkd[1398]: cali368ffe1c1ff: Gained carrier Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.188 [INFO][3626] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.221 [INFO][3626] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.29-k8s-csi--node--driver--hkphc-eth0 csi-node-driver- calico-system fbf57cb1-3114-4738-90d1-01b2062bf75f 1008 0 2025-02-13 15:54:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.128.0.29 csi-node-driver-hkphc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali368ffe1c1ff [] []}} ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Namespace="calico-system" 
Pod="csi-node-driver-hkphc" WorkloadEndpoint="10.128.0.29-k8s-csi--node--driver--hkphc-" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.221 [INFO][3626] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Namespace="calico-system" Pod="csi-node-driver-hkphc" WorkloadEndpoint="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.276 [INFO][3662] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" HandleID="k8s-pod-network.eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Workload="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.310 [INFO][3662] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" HandleID="k8s-pod-network.eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Workload="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334c80), Attrs:map[string]string{"namespace":"calico-system", "node":"10.128.0.29", "pod":"csi-node-driver-hkphc", "timestamp":"2025-02-13 15:55:06.276230811 +0000 UTC"}, Hostname:"10.128.0.29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.310 [INFO][3662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.353 [INFO][3662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.353 [INFO][3662] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.29' Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.356 [INFO][3662] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" host="10.128.0.29" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.369 [INFO][3662] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.29" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.382 [INFO][3662] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.386 [INFO][3662] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.395 [INFO][3662] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.395 [INFO][3662] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" host="10.128.0.29" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.398 [INFO][3662] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2 Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.408 [INFO][3662] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" host="10.128.0.29" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.418 [INFO][3662] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.130/26] block=192.168.125.128/26 
handle="k8s-pod-network.eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" host="10.128.0.29" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.419 [INFO][3662] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.130/26] handle="k8s-pod-network.eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" host="10.128.0.29" Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.419 [INFO][3662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:55:06.449980 containerd[1488]: 2025-02-13 15:55:06.419 [INFO][3662] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.130/26] IPv6=[] ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" HandleID="k8s-pod-network.eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Workload="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" Feb 13 15:55:06.451223 containerd[1488]: 2025-02-13 15:55:06.423 [INFO][3626] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Namespace="calico-system" Pod="csi-node-driver-hkphc" WorkloadEndpoint="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-csi--node--driver--hkphc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbf57cb1-3114-4738-90d1-01b2062bf75f", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.29", ContainerID:"", Pod:"csi-node-driver-hkphc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali368ffe1c1ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:06.451223 containerd[1488]: 2025-02-13 15:55:06.423 [INFO][3626] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.130/32] ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Namespace="calico-system" Pod="csi-node-driver-hkphc" WorkloadEndpoint="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" Feb 13 15:55:06.451223 containerd[1488]: 2025-02-13 15:55:06.423 [INFO][3626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali368ffe1c1ff ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Namespace="calico-system" Pod="csi-node-driver-hkphc" WorkloadEndpoint="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" Feb 13 15:55:06.451223 containerd[1488]: 2025-02-13 15:55:06.426 [INFO][3626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Namespace="calico-system" Pod="csi-node-driver-hkphc" WorkloadEndpoint="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" Feb 13 15:55:06.451223 containerd[1488]: 2025-02-13 15:55:06.426 [INFO][3626] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Namespace="calico-system" 
Pod="csi-node-driver-hkphc" WorkloadEndpoint="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-csi--node--driver--hkphc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fbf57cb1-3114-4738-90d1-01b2062bf75f", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.29", ContainerID:"eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2", Pod:"csi-node-driver-hkphc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali368ffe1c1ff", MAC:"86:0b:31:14:99:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:06.451223 containerd[1488]: 2025-02-13 15:55:06.447 [INFO][3626] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2" Namespace="calico-system" Pod="csi-node-driver-hkphc" WorkloadEndpoint="10.128.0.29-k8s-csi--node--driver--hkphc-eth0" Feb 13 15:55:06.474807 systemd[1]: 
Started cri-containerd-977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c.scope - libcontainer container 977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c. Feb 13 15:55:06.506063 systemd-networkd[1398]: caliebadfd68f3d: Link UP Feb 13 15:55:06.507856 systemd-networkd[1398]: caliebadfd68f3d: Gained carrier Feb 13 15:55:06.517312 containerd[1488]: time="2025-02-13T15:55:06.516074755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:55:06.517312 containerd[1488]: time="2025-02-13T15:55:06.516165036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:55:06.517312 containerd[1488]: time="2025-02-13T15:55:06.516193592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:06.517312 containerd[1488]: time="2025-02-13T15:55:06.516317525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.224 [INFO][3645] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.261 [INFO][3645] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0 calico-kube-controllers-65f6b8654f- calico-system 91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f 1289 0 2025-02-13 15:55:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65f6b8654f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.128.0.29 calico-kube-controllers-65f6b8654f-jsv2p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliebadfd68f3d [] []}} ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Namespace="calico-system" Pod="calico-kube-controllers-65f6b8654f-jsv2p" WorkloadEndpoint="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.261 [INFO][3645] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Namespace="calico-system" Pod="calico-kube-controllers-65f6b8654f-jsv2p" WorkloadEndpoint="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.308 [INFO][3671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" HandleID="k8s-pod-network.c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" 
Workload="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.327 [INFO][3671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" HandleID="k8s-pod-network.c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Workload="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030f150), Attrs:map[string]string{"namespace":"calico-system", "node":"10.128.0.29", "pod":"calico-kube-controllers-65f6b8654f-jsv2p", "timestamp":"2025-02-13 15:55:06.308011022 +0000 UTC"}, Hostname:"10.128.0.29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.327 [INFO][3671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.419 [INFO][3671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.419 [INFO][3671] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.29' Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.428 [INFO][3671] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" host="10.128.0.29" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.440 [INFO][3671] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.29" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.451 [INFO][3671] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.455 [INFO][3671] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.460 [INFO][3671] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.460 [INFO][3671] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" host="10.128.0.29" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.462 [INFO][3671] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4 Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.469 [INFO][3671] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" host="10.128.0.29" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.479 [INFO][3671] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.131/26] block=192.168.125.128/26 
handle="k8s-pod-network.c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" host="10.128.0.29" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.480 [INFO][3671] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.131/26] handle="k8s-pod-network.c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" host="10.128.0.29" Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.480 [INFO][3671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:55:06.536331 containerd[1488]: 2025-02-13 15:55:06.481 [INFO][3671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.131/26] IPv6=[] ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" HandleID="k8s-pod-network.c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Workload="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" Feb 13 15:55:06.537646 containerd[1488]: 2025-02-13 15:55:06.487 [INFO][3645] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Namespace="calico-system" Pod="calico-kube-controllers-65f6b8654f-jsv2p" WorkloadEndpoint="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0", GenerateName:"calico-kube-controllers-65f6b8654f-", Namespace:"calico-system", SelfLink:"", UID:"91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f6b8654f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.29", ContainerID:"", Pod:"calico-kube-controllers-65f6b8654f-jsv2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliebadfd68f3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:06.537646 containerd[1488]: 2025-02-13 15:55:06.488 [INFO][3645] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.131/32] ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Namespace="calico-system" Pod="calico-kube-controllers-65f6b8654f-jsv2p" WorkloadEndpoint="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" Feb 13 15:55:06.537646 containerd[1488]: 2025-02-13 15:55:06.488 [INFO][3645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebadfd68f3d ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Namespace="calico-system" Pod="calico-kube-controllers-65f6b8654f-jsv2p" WorkloadEndpoint="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" Feb 13 15:55:06.537646 containerd[1488]: 2025-02-13 15:55:06.508 [INFO][3645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Namespace="calico-system" Pod="calico-kube-controllers-65f6b8654f-jsv2p" WorkloadEndpoint="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" Feb 13 15:55:06.537646 containerd[1488]: 2025-02-13 15:55:06.509 
[INFO][3645] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Namespace="calico-system" Pod="calico-kube-controllers-65f6b8654f-jsv2p" WorkloadEndpoint="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0", GenerateName:"calico-kube-controllers-65f6b8654f-", Namespace:"calico-system", SelfLink:"", UID:"91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 55, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f6b8654f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.29", ContainerID:"c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4", Pod:"calico-kube-controllers-65f6b8654f-jsv2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliebadfd68f3d", MAC:"0e:f9:66:a5:84:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:06.537646 containerd[1488]: 2025-02-13 15:55:06.531 [INFO][3645] 
cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4" Namespace="calico-system" Pod="calico-kube-controllers-65f6b8654f-jsv2p" WorkloadEndpoint="10.128.0.29-k8s-calico--kube--controllers--65f6b8654f--jsv2p-eth0" Feb 13 15:55:06.565342 systemd[1]: Started cri-containerd-eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2.scope - libcontainer container eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2. Feb 13 15:55:06.566357 kubelet[1860]: E0213 15:55:06.566019 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:06.584797 containerd[1488]: time="2025-02-13T15:55:06.584643547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jfhxb,Uid:8836a9fd-bf45-4f06-b3a3-1533b6113050,Namespace:default,Attempt:11,} returns sandbox id \"977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c\"" Feb 13 15:55:06.589590 containerd[1488]: time="2025-02-13T15:55:06.589314716Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:55:06.603694 containerd[1488]: time="2025-02-13T15:55:06.602742055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:55:06.603694 containerd[1488]: time="2025-02-13T15:55:06.602834969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:55:06.603694 containerd[1488]: time="2025-02-13T15:55:06.602860894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:06.603694 containerd[1488]: time="2025-02-13T15:55:06.602987408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:06.624033 containerd[1488]: time="2025-02-13T15:55:06.623440574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hkphc,Uid:fbf57cb1-3114-4738-90d1-01b2062bf75f,Namespace:calico-system,Attempt:14,} returns sandbox id \"eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2\"" Feb 13 15:55:06.644807 systemd[1]: Started cri-containerd-c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4.scope - libcontainer container c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4. Feb 13 15:55:06.702804 containerd[1488]: time="2025-02-13T15:55:06.702755400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f6b8654f-jsv2p,Uid:91fbe7a0-482b-4d8d-9051-4ed2b2d7f48f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4\"" Feb 13 15:55:07.190601 kernel: bpftool[3933]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:55:07.427366 systemd-networkd[1398]: cali3af8a639af3: Gained IPv6LL Feb 13 15:55:07.546086 systemd-networkd[1398]: vxlan.calico: Link UP Feb 13 15:55:07.546101 systemd-networkd[1398]: vxlan.calico: Gained carrier Feb 13 15:55:07.569159 kubelet[1860]: E0213 15:55:07.568882 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:08.130901 systemd-networkd[1398]: cali368ffe1c1ff: Gained IPv6LL Feb 13 15:55:08.515429 systemd-networkd[1398]: caliebadfd68f3d: Gained IPv6LL Feb 13 15:55:08.570964 kubelet[1860]: E0213 15:55:08.570908 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:08.899312 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Feb 13 15:55:09.572857 kubelet[1860]: E0213 15:55:09.572801 1860 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:09.635674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330779940.mount: Deactivated successfully. Feb 13 15:55:10.573917 kubelet[1860]: E0213 15:55:10.573850 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:11.240778 containerd[1488]: time="2025-02-13T15:55:11.240703666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:11.242399 containerd[1488]: time="2025-02-13T15:55:11.242310468Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 15:55:11.243595 containerd[1488]: time="2025-02-13T15:55:11.243377689Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:11.255210 containerd[1488]: time="2025-02-13T15:55:11.255141870Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.665770559s" Feb 13 15:55:11.255210 containerd[1488]: time="2025-02-13T15:55:11.255209043Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:55:11.256832 containerd[1488]: time="2025-02-13T15:55:11.255509929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:11.258015 containerd[1488]: 
time="2025-02-13T15:55:11.257970793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:55:11.260694 containerd[1488]: time="2025-02-13T15:55:11.260636374Z" level=info msg="CreateContainer within sandbox \"977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 15:55:11.287094 containerd[1488]: time="2025-02-13T15:55:11.287000284Z" level=info msg="CreateContainer within sandbox \"977c78e9c1893326afadb188018cbe4ecb1ae08686eefbfdb5f81d50cb3bcb0c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4db8bc7c699c82b0d5908d4cea5736310709b1208c22ff5546e7407c6bccbcac\"" Feb 13 15:55:11.289065 containerd[1488]: time="2025-02-13T15:55:11.287689036Z" level=info msg="StartContainer for \"4db8bc7c699c82b0d5908d4cea5736310709b1208c22ff5546e7407c6bccbcac\"" Feb 13 15:55:11.333781 systemd[1]: Started cri-containerd-4db8bc7c699c82b0d5908d4cea5736310709b1208c22ff5546e7407c6bccbcac.scope - libcontainer container 4db8bc7c699c82b0d5908d4cea5736310709b1208c22ff5546e7407c6bccbcac. 
Feb 13 15:55:11.369042 containerd[1488]: time="2025-02-13T15:55:11.368981318Z" level=info msg="StartContainer for \"4db8bc7c699c82b0d5908d4cea5736310709b1208c22ff5546e7407c6bccbcac\" returns successfully" Feb 13 15:55:11.574474 kubelet[1860]: E0213 15:55:11.574412 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:11.829004 ntpd[1458]: Listen normally on 7 vxlan.calico 192.168.125.128:123 Feb 13 15:55:11.829146 ntpd[1458]: Listen normally on 8 cali3af8a639af3 [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 15:55:11.829241 ntpd[1458]: Listen normally on 9 cali368ffe1c1ff [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 15:55:11.829301 ntpd[1458]: Listen normally on 10 caliebadfd68f3d [fe80::ecee:eeff:feee:eeee%5]:123 Feb 13 15:55:11.829358 ntpd[1458]: Listen normally on 11 vxlan.calico [fe80::64d3:8eff:fe93:222%6]:123 Feb 13 15:55:12.465111 containerd[1488]: time="2025-02-13T15:55:12.465034763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:12.466367 containerd[1488]: time="2025-02-13T15:55:12.466295714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 15:55:12.469390 containerd[1488]: 
time="2025-02-13T15:55:12.467705811Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:12.471364 containerd[1488]: time="2025-02-13T15:55:12.471320557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:12.472555 containerd[1488]: time="2025-02-13T15:55:12.472515258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.214494285s" Feb 13 15:55:12.472800 containerd[1488]: time="2025-02-13T15:55:12.472771361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 15:55:12.474450 containerd[1488]: time="2025-02-13T15:55:12.474414016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:55:12.475719 containerd[1488]: time="2025-02-13T15:55:12.475683830Z" level=info msg="CreateContainer within sandbox \"eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:55:12.495545 containerd[1488]: time="2025-02-13T15:55:12.495483266Z" level=info msg="CreateContainer within sandbox \"eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ca75712d2a3fc2b5d745a65d28b85a74bcabfd01f085483305d6c080df34658f\"" Feb 13 15:55:12.497652 containerd[1488]: 
time="2025-02-13T15:55:12.496108431Z" level=info msg="StartContainer for \"ca75712d2a3fc2b5d745a65d28b85a74bcabfd01f085483305d6c080df34658f\"" Feb 13 15:55:12.543540 systemd[1]: run-containerd-runc-k8s.io-ca75712d2a3fc2b5d745a65d28b85a74bcabfd01f085483305d6c080df34658f-runc.g0QHRk.mount: Deactivated successfully. Feb 13 15:55:12.554798 systemd[1]: Started cri-containerd-ca75712d2a3fc2b5d745a65d28b85a74bcabfd01f085483305d6c080df34658f.scope - libcontainer container ca75712d2a3fc2b5d745a65d28b85a74bcabfd01f085483305d6c080df34658f. Feb 13 15:55:12.575024 kubelet[1860]: E0213 15:55:12.574951 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:12.595350 containerd[1488]: time="2025-02-13T15:55:12.595260367Z" level=info msg="StartContainer for \"ca75712d2a3fc2b5d745a65d28b85a74bcabfd01f085483305d6c080df34658f\" returns successfully" Feb 13 15:55:13.575729 kubelet[1860]: E0213 15:55:13.575620 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:14.564876 containerd[1488]: time="2025-02-13T15:55:14.564801990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:14.566197 containerd[1488]: time="2025-02-13T15:55:14.566116394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 15:55:14.567724 containerd[1488]: time="2025-02-13T15:55:14.567652456Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:14.570632 containerd[1488]: time="2025-02-13T15:55:14.570542245Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:14.571610 containerd[1488]: time="2025-02-13T15:55:14.571547143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.09708685s" Feb 13 15:55:14.571728 containerd[1488]: time="2025-02-13T15:55:14.571615646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 15:55:14.573890 containerd[1488]: time="2025-02-13T15:55:14.573785480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:55:14.576783 kubelet[1860]: E0213 15:55:14.576684 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:14.596756 containerd[1488]: time="2025-02-13T15:55:14.596381571Z" level=info msg="CreateContainer within sandbox \"c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:55:14.617278 containerd[1488]: time="2025-02-13T15:55:14.617212385Z" level=info msg="CreateContainer within sandbox \"c8be6d6100f07900e0e4707858fc17cc35bc8e635e537a9b59584796c3224fe4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4976134c09ebf1529d23e63067ee76da59c74ee4f3cc961e6eb862af4aa2ab8a\"" Feb 13 15:55:14.618042 containerd[1488]: time="2025-02-13T15:55:14.617978833Z" level=info msg="StartContainer for 
\"4976134c09ebf1529d23e63067ee76da59c74ee4f3cc961e6eb862af4aa2ab8a\"" Feb 13 15:55:14.663808 systemd[1]: Started cri-containerd-4976134c09ebf1529d23e63067ee76da59c74ee4f3cc961e6eb862af4aa2ab8a.scope - libcontainer container 4976134c09ebf1529d23e63067ee76da59c74ee4f3cc961e6eb862af4aa2ab8a. Feb 13 15:55:14.722077 containerd[1488]: time="2025-02-13T15:55:14.722024416Z" level=info msg="StartContainer for \"4976134c09ebf1529d23e63067ee76da59c74ee4f3cc961e6eb862af4aa2ab8a\" returns successfully" Feb 13 15:55:15.182837 kubelet[1860]: I0213 15:55:15.182005 1860 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-jfhxb" podStartSLOduration=17.514915588 podStartE2EDuration="22.181949939s" podCreationTimestamp="2025-02-13 15:54:53 +0000 UTC" firstStartedPulling="2025-02-13 15:55:06.588725659 +0000 UTC m=+27.761517914" lastFinishedPulling="2025-02-13 15:55:11.255759996 +0000 UTC m=+32.428552265" observedRunningTime="2025-02-13 15:55:12.177825525 +0000 UTC m=+33.350617798" watchObservedRunningTime="2025-02-13 15:55:15.181949939 +0000 UTC m=+36.354742210" Feb 13 15:55:15.547062 kubelet[1860]: I0213 15:55:15.547012 1860 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65f6b8654f-jsv2p" podStartSLOduration=2.678699963 podStartE2EDuration="10.54694056s" podCreationTimestamp="2025-02-13 15:55:05 +0000 UTC" firstStartedPulling="2025-02-13 15:55:06.704462344 +0000 UTC m=+27.877254596" lastFinishedPulling="2025-02-13 15:55:14.572702927 +0000 UTC m=+35.745495193" observedRunningTime="2025-02-13 15:55:15.182972738 +0000 UTC m=+36.355765010" watchObservedRunningTime="2025-02-13 15:55:15.54694056 +0000 UTC m=+36.719732831" Feb 13 15:55:15.577510 kubelet[1860]: E0213 15:55:15.577094 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:15.839686 update_engine[1475]: I20250213 15:55:15.838608 
1475 update_attempter.cc:509] Updating boot flags... Feb 13 15:55:15.949873 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (4262) Feb 13 15:55:15.956639 containerd[1488]: time="2025-02-13T15:55:15.956369797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:15.960268 containerd[1488]: time="2025-02-13T15:55:15.960164694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 15:55:15.961799 containerd[1488]: time="2025-02-13T15:55:15.961299723Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:15.966886 containerd[1488]: time="2025-02-13T15:55:15.966845386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:15.970909 containerd[1488]: time="2025-02-13T15:55:15.968203485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.394375572s" Feb 13 15:55:15.970909 containerd[1488]: time="2025-02-13T15:55:15.970239811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 15:55:15.982618 containerd[1488]: 
time="2025-02-13T15:55:15.982349961Z" level=info msg="CreateContainer within sandbox \"eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:55:16.041641 containerd[1488]: time="2025-02-13T15:55:16.040640761Z" level=info msg="CreateContainer within sandbox \"eb34b7dd6c4201a3a6d94bde306c9f9412865c00872c9f15b8b479c3bb9c19c2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8abf71b04259ae160dbc6928ac3ccda4c8b02b7e49983a189b825f4dd1c7b944\"" Feb 13 15:55:16.043852 containerd[1488]: time="2025-02-13T15:55:16.043813812Z" level=info msg="StartContainer for \"8abf71b04259ae160dbc6928ac3ccda4c8b02b7e49983a189b825f4dd1c7b944\"" Feb 13 15:55:16.120592 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (4258) Feb 13 15:55:16.165492 systemd[1]: Started cri-containerd-8abf71b04259ae160dbc6928ac3ccda4c8b02b7e49983a189b825f4dd1c7b944.scope - libcontainer container 8abf71b04259ae160dbc6928ac3ccda4c8b02b7e49983a189b825f4dd1c7b944. 
Feb 13 15:55:16.262751 containerd[1488]: time="2025-02-13T15:55:16.262474142Z" level=info msg="StartContainer for \"8abf71b04259ae160dbc6928ac3ccda4c8b02b7e49983a189b825f4dd1c7b944\" returns successfully" Feb 13 15:55:16.577781 kubelet[1860]: E0213 15:55:16.577691 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:16.696462 kubelet[1860]: I0213 15:55:16.696389 1860 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:55:16.696462 kubelet[1860]: I0213 15:55:16.696438 1860 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:55:17.578755 kubelet[1860]: E0213 15:55:17.578702 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:18.579402 kubelet[1860]: E0213 15:55:18.579331 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:19.545164 kubelet[1860]: E0213 15:55:19.545093 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:19.579724 kubelet[1860]: E0213 15:55:19.579657 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:20.580233 kubelet[1860]: E0213 15:55:20.580164 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:21.580380 kubelet[1860]: E0213 15:55:21.580306 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:21.761273 kubelet[1860]: I0213 15:55:21.761219 1860 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hkphc" podStartSLOduration=33.416054647 podStartE2EDuration="42.761165771s" podCreationTimestamp="2025-02-13 15:54:39 +0000 UTC" firstStartedPulling="2025-02-13 15:55:06.62670673 +0000 UTC m=+27.799498988" lastFinishedPulling="2025-02-13 15:55:15.971817845 +0000 UTC m=+37.144610112" observedRunningTime="2025-02-13 15:55:17.268619509 +0000 UTC m=+38.441411781" watchObservedRunningTime="2025-02-13 15:55:21.761165771 +0000 UTC m=+42.933958042" Feb 13 15:55:21.761943 kubelet[1860]: I0213 15:55:21.761542 1860 topology_manager.go:215] "Topology Admit Handler" podUID="3f5536a5-00fb-4825-8257-52996988d410" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 15:55:21.769640 systemd[1]: Created slice kubepods-besteffort-pod3f5536a5_00fb_4825_8257_52996988d410.slice - libcontainer container kubepods-besteffort-pod3f5536a5_00fb_4825_8257_52996988d410.slice. Feb 13 15:55:21.867315 kubelet[1860]: I0213 15:55:21.867088 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3f5536a5-00fb-4825-8257-52996988d410-data\") pod \"nfs-server-provisioner-0\" (UID: \"3f5536a5-00fb-4825-8257-52996988d410\") " pod="default/nfs-server-provisioner-0" Feb 13 15:55:21.867315 kubelet[1860]: I0213 15:55:21.867171 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwf2\" (UniqueName: \"kubernetes.io/projected/3f5536a5-00fb-4825-8257-52996988d410-kube-api-access-5pwf2\") pod \"nfs-server-provisioner-0\" (UID: \"3f5536a5-00fb-4825-8257-52996988d410\") " pod="default/nfs-server-provisioner-0" Feb 13 15:55:22.073739 containerd[1488]: time="2025-02-13T15:55:22.073681789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3f5536a5-00fb-4825-8257-52996988d410,Namespace:default,Attempt:0,}" Feb 13 
15:55:22.248750 systemd-networkd[1398]: cali60e51b789ff: Link UP Feb 13 15:55:22.249088 systemd-networkd[1398]: cali60e51b789ff: Gained carrier Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.137 [INFO][4325] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.29-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 3f5536a5-00fb-4825-8257-52996988d410 1447 0 2025-02-13 15:55:21 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.128.0.29 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.29-k8s-nfs--server--provisioner--0-" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.137 [INFO][4325] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.173 [INFO][4336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" HandleID="k8s-pod-network.3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Workload="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.189 [INFO][4336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" HandleID="k8s-pod-network.3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Workload="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318ba0), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.29", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 15:55:22.17340334 +0000 UTC"}, Hostname:"10.128.0.29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.189 [INFO][4336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.189 [INFO][4336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.189 [INFO][4336] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.29' Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.193 [INFO][4336] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" host="10.128.0.29" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.198 [INFO][4336] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.29" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.204 [INFO][4336] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.207 [INFO][4336] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.210 [INFO][4336] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.210 [INFO][4336] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" host="10.128.0.29" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.212 [INFO][4336] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0 Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.223 [INFO][4336] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" host="10.128.0.29" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.241 [INFO][4336] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.132/26] block=192.168.125.128/26 
handle="k8s-pod-network.3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" host="10.128.0.29" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.241 [INFO][4336] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.132/26] handle="k8s-pod-network.3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" host="10.128.0.29" Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.241 [INFO][4336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:55:22.269803 containerd[1488]: 2025-02-13 15:55:22.241 [INFO][4336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.132/26] IPv6=[] ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" HandleID="k8s-pod-network.3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Workload="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:55:22.271989 containerd[1488]: 2025-02-13 15:55:22.243 [INFO][4325] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"3f5536a5-00fb-4825-8257-52996988d410", ResourceVersion:"1447", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 55, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.29", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:22.271989 containerd[1488]: 2025-02-13 15:55:22.243 [INFO][4325] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.132/32] ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:55:22.271989 containerd[1488]: 2025-02-13 15:55:22.243 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:55:22.271989 containerd[1488]: 2025-02-13 15:55:22.247 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:55:22.272970 containerd[1488]: 2025-02-13 15:55:22.247 [INFO][4325] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"3f5536a5-00fb-4825-8257-52996988d410", ResourceVersion:"1447", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 55, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.29", ContainerID:"3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"16:78:1c:52:b9:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:22.272970 containerd[1488]: 2025-02-13 15:55:22.267 [INFO][4325] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.29-k8s-nfs--server--provisioner--0-eth0" Feb 13 15:55:22.306639 containerd[1488]: time="2025-02-13T15:55:22.305984983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:55:22.306639 containerd[1488]: time="2025-02-13T15:55:22.306253381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:55:22.306639 containerd[1488]: time="2025-02-13T15:55:22.306280892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:22.306639 containerd[1488]: time="2025-02-13T15:55:22.306434281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:22.345767 systemd[1]: Started cri-containerd-3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0.scope - libcontainer container 3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0. Feb 13 15:55:22.402139 containerd[1488]: time="2025-02-13T15:55:22.402085732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3f5536a5-00fb-4825-8257-52996988d410,Namespace:default,Attempt:0,} returns sandbox id \"3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0\"" Feb 13 15:55:22.404635 containerd[1488]: time="2025-02-13T15:55:22.404539463Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 15:55:22.580968 kubelet[1860]: E0213 15:55:22.580898 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:23.581815 kubelet[1860]: E0213 15:55:23.581756 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:24.003817 systemd-networkd[1398]: cali60e51b789ff: Gained IPv6LL Feb 13 15:55:24.583590 kubelet[1860]: E0213 15:55:24.582799 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 15:55:24.875249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121502081.mount: Deactivated successfully. Feb 13 15:55:25.583449 kubelet[1860]: E0213 15:55:25.583395 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:26.584354 kubelet[1860]: E0213 15:55:26.583976 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:26.829054 ntpd[1458]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 15:55:26.829607 ntpd[1458]: 13 Feb 15:55:26 ntpd[1458]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 15:55:27.278421 containerd[1488]: time="2025-02-13T15:55:27.278353472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:27.280051 containerd[1488]: time="2025-02-13T15:55:27.279980623Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91045236" Feb 13 15:55:27.282582 containerd[1488]: time="2025-02-13T15:55:27.281303130Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:27.285487 containerd[1488]: time="2025-02-13T15:55:27.285445886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:27.289719 containerd[1488]: time="2025-02-13T15:55:27.289677668Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.885029129s" Feb 13 15:55:27.289900 containerd[1488]: time="2025-02-13T15:55:27.289875513Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 15:55:27.296857 containerd[1488]: time="2025-02-13T15:55:27.296819063Z" level=info msg="CreateContainer within sandbox \"3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 15:55:27.319318 containerd[1488]: time="2025-02-13T15:55:27.318529604Z" level=info msg="CreateContainer within sandbox \"3827ac269f0e01430403394f3fe3fa8ca67602381711139d23aff7642b2211a0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"503f11e3660fc912c5c254521b9092cde9ce8713929c2f60466a455c50827d19\"" Feb 13 15:55:27.321882 containerd[1488]: time="2025-02-13T15:55:27.321838039Z" level=info msg="StartContainer for \"503f11e3660fc912c5c254521b9092cde9ce8713929c2f60466a455c50827d19\"" Feb 13 15:55:27.373804 systemd[1]: Started cri-containerd-503f11e3660fc912c5c254521b9092cde9ce8713929c2f60466a455c50827d19.scope - libcontainer container 503f11e3660fc912c5c254521b9092cde9ce8713929c2f60466a455c50827d19. 
Feb 13 15:55:27.410435 containerd[1488]: time="2025-02-13T15:55:27.410361463Z" level=info msg="StartContainer for \"503f11e3660fc912c5c254521b9092cde9ce8713929c2f60466a455c50827d19\" returns successfully" Feb 13 15:55:27.585641 kubelet[1860]: E0213 15:55:27.585085 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:28.302552 kubelet[1860]: I0213 15:55:28.302345 1860 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.412388265 podStartE2EDuration="7.302284685s" podCreationTimestamp="2025-02-13 15:55:21 +0000 UTC" firstStartedPulling="2025-02-13 15:55:22.403849134 +0000 UTC m=+43.576641390" lastFinishedPulling="2025-02-13 15:55:27.293745552 +0000 UTC m=+48.466537810" observedRunningTime="2025-02-13 15:55:28.301941679 +0000 UTC m=+49.474733951" watchObservedRunningTime="2025-02-13 15:55:28.302284685 +0000 UTC m=+49.475076961" Feb 13 15:55:28.585477 kubelet[1860]: E0213 15:55:28.585294 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:29.585555 kubelet[1860]: E0213 15:55:29.585480 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:30.586735 kubelet[1860]: E0213 15:55:30.586659 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:31.587375 kubelet[1860]: E0213 15:55:31.587300 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:31.692606 kubelet[1860]: I0213 15:55:31.692548 1860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:55:31.812743 systemd[1]: 
run-containerd-runc-k8s.io-6553e5181190f761141ee3496fb5f1c760513658b0cb7a77b3e85046d734b3ca-runc.AhA1ZN.mount: Deactivated successfully. Feb 13 15:55:32.588404 kubelet[1860]: E0213 15:55:32.588337 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:33.588813 kubelet[1860]: E0213 15:55:33.588745 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:34.589850 kubelet[1860]: E0213 15:55:34.589780 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:35.590503 kubelet[1860]: E0213 15:55:35.590437 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:36.591391 kubelet[1860]: E0213 15:55:36.591327 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:37.200354 kubelet[1860]: I0213 15:55:37.199779 1860 topology_manager.go:215] "Topology Admit Handler" podUID="fd30d61d-017c-4de9-bd0b-dd82491c27d9" podNamespace="default" podName="test-pod-1" Feb 13 15:55:37.207965 systemd[1]: Created slice kubepods-besteffort-podfd30d61d_017c_4de9_bd0b_dd82491c27d9.slice - libcontainer container kubepods-besteffort-podfd30d61d_017c_4de9_bd0b_dd82491c27d9.slice. 
Feb 13 15:55:37.358597 kubelet[1860]: I0213 15:55:37.358493 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjvgw\" (UniqueName: \"kubernetes.io/projected/fd30d61d-017c-4de9-bd0b-dd82491c27d9-kube-api-access-rjvgw\") pod \"test-pod-1\" (UID: \"fd30d61d-017c-4de9-bd0b-dd82491c27d9\") " pod="default/test-pod-1" Feb 13 15:55:37.358820 kubelet[1860]: I0213 15:55:37.358655 1860 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df37a927-8953-4a61-b5cb-043ebebeb14d\" (UniqueName: \"kubernetes.io/nfs/fd30d61d-017c-4de9-bd0b-dd82491c27d9-pvc-df37a927-8953-4a61-b5cb-043ebebeb14d\") pod \"test-pod-1\" (UID: \"fd30d61d-017c-4de9-bd0b-dd82491c27d9\") " pod="default/test-pod-1" Feb 13 15:55:37.502620 kernel: FS-Cache: Loaded Feb 13 15:55:37.589861 kernel: RPC: Registered named UNIX socket transport module. Feb 13 15:55:37.590045 kernel: RPC: Registered udp transport module. Feb 13 15:55:37.590085 kernel: RPC: Registered tcp transport module. Feb 13 15:55:37.592100 kubelet[1860]: E0213 15:55:37.592028 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:37.594539 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 15:55:37.606758 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 15:55:37.873612 kernel: NFS: Registering the id_resolver key type Feb 13 15:55:37.873792 kernel: Key type id_resolver registered Feb 13 15:55:37.882239 kernel: Key type id_legacy registered Feb 13 15:55:37.924587 nfsidmap[4567]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Feb 13 15:55:37.936806 nfsidmap[4568]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Feb 13 15:55:38.116322 containerd[1488]: time="2025-02-13T15:55:38.116253740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fd30d61d-017c-4de9-bd0b-dd82491c27d9,Namespace:default,Attempt:0,}" Feb 13 15:55:38.276793 systemd-networkd[1398]: cali5ec59c6bf6e: Link UP Feb 13 15:55:38.277400 systemd-networkd[1398]: cali5ec59c6bf6e: Gained carrier Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.188 [INFO][4570] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.29-k8s-test--pod--1-eth0 default fd30d61d-017c-4de9-bd0b-dd82491c27d9 1510 0 2025-02-13 15:55:22 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.128.0.29 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.29-k8s-test--pod--1-" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.188 [INFO][4570] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.29-k8s-test--pod--1-eth0" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.221 [INFO][4580] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" HandleID="k8s-pod-network.bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Workload="10.128.0.29-k8s-test--pod--1-eth0" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.234 [INFO][4580] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" HandleID="k8s-pod-network.bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Workload="10.128.0.29-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000314b60), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.29", "pod":"test-pod-1", "timestamp":"2025-02-13 15:55:38.221359483 +0000 UTC"}, Hostname:"10.128.0.29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.234 [INFO][4580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.234 [INFO][4580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.234 [INFO][4580] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.29' Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.236 [INFO][4580] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" host="10.128.0.29" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.241 [INFO][4580] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.29" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.247 [INFO][4580] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.249 [INFO][4580] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.252 [INFO][4580] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="10.128.0.29" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.252 [INFO][4580] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" host="10.128.0.29" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.254 [INFO][4580] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836 Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.261 [INFO][4580] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" host="10.128.0.29" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.270 [INFO][4580] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.133/26] block=192.168.125.128/26 
handle="k8s-pod-network.bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" host="10.128.0.29" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.270 [INFO][4580] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.133/26] handle="k8s-pod-network.bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" host="10.128.0.29" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.270 [INFO][4580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.270 [INFO][4580] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.133/26] IPv6=[] ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" HandleID="k8s-pod-network.bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Workload="10.128.0.29-k8s-test--pod--1-eth0" Feb 13 15:55:38.297708 containerd[1488]: 2025-02-13 15:55:38.272 [INFO][4570] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.29-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"fd30d61d-017c-4de9-bd0b-dd82491c27d9", ResourceVersion:"1510", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 55, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.128.0.29", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:38.299548 containerd[1488]: 2025-02-13 15:55:38.273 [INFO][4570] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.133/32] ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.29-k8s-test--pod--1-eth0" Feb 13 15:55:38.299548 containerd[1488]: 2025-02-13 15:55:38.273 [INFO][4570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.29-k8s-test--pod--1-eth0" Feb 13 15:55:38.299548 containerd[1488]: 2025-02-13 15:55:38.276 [INFO][4570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.29-k8s-test--pod--1-eth0" Feb 13 15:55:38.299548 containerd[1488]: 2025-02-13 15:55:38.277 [INFO][4570] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.29-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.29-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"fd30d61d-017c-4de9-bd0b-dd82491c27d9", ResourceVersion:"1510", Generation:0, 
CreationTimestamp:time.Date(2025, time.February, 13, 15, 55, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.29", ContainerID:"bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"26:6f:93:88:9c:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:55:38.299548 containerd[1488]: 2025-02-13 15:55:38.292 [INFO][4570] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.29-k8s-test--pod--1-eth0" Feb 13 15:55:38.335345 containerd[1488]: time="2025-02-13T15:55:38.335211648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:55:38.335345 containerd[1488]: time="2025-02-13T15:55:38.335292415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:55:38.335608 containerd[1488]: time="2025-02-13T15:55:38.335320361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:38.336175 containerd[1488]: time="2025-02-13T15:55:38.336080924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:38.363794 systemd[1]: Started cri-containerd-bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836.scope - libcontainer container bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836. Feb 13 15:55:38.419710 containerd[1488]: time="2025-02-13T15:55:38.419640870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fd30d61d-017c-4de9-bd0b-dd82491c27d9,Namespace:default,Attempt:0,} returns sandbox id \"bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836\"" Feb 13 15:55:38.421977 containerd[1488]: time="2025-02-13T15:55:38.421939352Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:55:38.593213 kubelet[1860]: E0213 15:55:38.593046 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:38.695834 containerd[1488]: time="2025-02-13T15:55:38.695763854Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:38.696947 containerd[1488]: time="2025-02-13T15:55:38.696875807Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 15:55:38.702595 containerd[1488]: time="2025-02-13T15:55:38.701081846Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 279.091931ms" Feb 13 15:55:38.702595 containerd[1488]: 
time="2025-02-13T15:55:38.701129795Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:55:38.707938 containerd[1488]: time="2025-02-13T15:55:38.707889467Z" level=info msg="CreateContainer within sandbox \"bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 15:55:38.733255 containerd[1488]: time="2025-02-13T15:55:38.733158165Z" level=info msg="CreateContainer within sandbox \"bafacb00991084d8cb68d8f2bcbe87c0c63a4fb4aaad3bdd439f4f366c6f2836\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5043cbcba5f297162527e5b21af5d417c6315fd1da29c15e39100988302ac0bc\"" Feb 13 15:55:38.734333 containerd[1488]: time="2025-02-13T15:55:38.734194770Z" level=info msg="StartContainer for \"5043cbcba5f297162527e5b21af5d417c6315fd1da29c15e39100988302ac0bc\"" Feb 13 15:55:38.783798 systemd[1]: Started cri-containerd-5043cbcba5f297162527e5b21af5d417c6315fd1da29c15e39100988302ac0bc.scope - libcontainer container 5043cbcba5f297162527e5b21af5d417c6315fd1da29c15e39100988302ac0bc. 
Feb 13 15:55:38.825752 containerd[1488]: time="2025-02-13T15:55:38.825693545Z" level=info msg="StartContainer for \"5043cbcba5f297162527e5b21af5d417c6315fd1da29c15e39100988302ac0bc\" returns successfully" Feb 13 15:55:39.331140 kubelet[1860]: I0213 15:55:39.330494 1860 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.04992588 podStartE2EDuration="17.330434506s" podCreationTimestamp="2025-02-13 15:55:22 +0000 UTC" firstStartedPulling="2025-02-13 15:55:38.421230412 +0000 UTC m=+59.594022670" lastFinishedPulling="2025-02-13 15:55:38.701739035 +0000 UTC m=+59.874531296" observedRunningTime="2025-02-13 15:55:39.330157571 +0000 UTC m=+60.502949842" watchObservedRunningTime="2025-02-13 15:55:39.330434506 +0000 UTC m=+60.503226781" Feb 13 15:55:39.545155 kubelet[1860]: E0213 15:55:39.545082 1860 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:39.577629 containerd[1488]: time="2025-02-13T15:55:39.577550160Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:55:39.578293 containerd[1488]: time="2025-02-13T15:55:39.577743732Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully" Feb 13 15:55:39.578293 containerd[1488]: time="2025-02-13T15:55:39.577765266Z" level=info msg="StopPodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully" Feb 13 15:55:39.578740 containerd[1488]: time="2025-02-13T15:55:39.578661669Z" level=info msg="RemovePodSandbox for \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:55:39.578740 containerd[1488]: time="2025-02-13T15:55:39.578715613Z" level=info msg="Forcibly stopping sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\"" Feb 13 15:55:39.578932 containerd[1488]: 
time="2025-02-13T15:55:39.578838027Z" level=info msg="TearDown network for sandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" successfully" Feb 13 15:55:39.583340 containerd[1488]: time="2025-02-13T15:55:39.583204406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:55:39.583340 containerd[1488]: time="2025-02-13T15:55:39.583277444Z" level=info msg="RemovePodSandbox \"ed742b6968d8c9868f62f06984a45bd3383080beba2ef26d45794cc74d36263a\" returns successfully" Feb 13 15:55:39.584331 containerd[1488]: time="2025-02-13T15:55:39.583962326Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" Feb 13 15:55:39.584331 containerd[1488]: time="2025-02-13T15:55:39.584083712Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully" Feb 13 15:55:39.584331 containerd[1488]: time="2025-02-13T15:55:39.584103633Z" level=info msg="StopPodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully" Feb 13 15:55:39.584649 containerd[1488]: time="2025-02-13T15:55:39.584628092Z" level=info msg="RemovePodSandbox for \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" Feb 13 15:55:39.584742 containerd[1488]: time="2025-02-13T15:55:39.584720204Z" level=info msg="Forcibly stopping sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\"" Feb 13 15:55:39.584899 containerd[1488]: time="2025-02-13T15:55:39.584824370Z" level=info msg="TearDown network for sandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" successfully" Feb 13 15:55:39.588893 containerd[1488]: time="2025-02-13T15:55:39.588848436Z" level=warning msg="Failed to get 
podSandbox status for container event for sandboxID \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:55:39.588893 containerd[1488]: time="2025-02-13T15:55:39.588921103Z" level=info msg="RemovePodSandbox \"9b1dfecee0126f0f42f7ae654d4f338738fca0d153e1fc5d6900a059dac1fd3f\" returns successfully" Feb 13 15:55:39.589381 containerd[1488]: time="2025-02-13T15:55:39.589344230Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\"" Feb 13 15:55:39.589490 containerd[1488]: time="2025-02-13T15:55:39.589463181Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully" Feb 13 15:55:39.589615 containerd[1488]: time="2025-02-13T15:55:39.589486399Z" level=info msg="StopPodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully" Feb 13 15:55:39.589892 containerd[1488]: time="2025-02-13T15:55:39.589860587Z" level=info msg="RemovePodSandbox for \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\"" Feb 13 15:55:39.589991 containerd[1488]: time="2025-02-13T15:55:39.589897094Z" level=info msg="Forcibly stopping sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\"" Feb 13 15:55:39.590052 containerd[1488]: time="2025-02-13T15:55:39.589992067Z" level=info msg="TearDown network for sandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" successfully" Feb 13 15:55:39.593421 kubelet[1860]: E0213 15:55:39.593372 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:55:39.594003 containerd[1488]: time="2025-02-13T15:55:39.593703818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:55:39.594003 containerd[1488]: time="2025-02-13T15:55:39.593769608Z" level=info msg="RemovePodSandbox \"27c0bed45db5cb3bdd3f14465f4d064fc3265dcd84d9e16b0aafbbdda4b4e498\" returns successfully" Feb 13 15:55:39.594296 containerd[1488]: time="2025-02-13T15:55:39.594181421Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\"" Feb 13 15:55:39.594398 containerd[1488]: time="2025-02-13T15:55:39.594295104Z" level=info msg="TearDown network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" successfully" Feb 13 15:55:39.594398 containerd[1488]: time="2025-02-13T15:55:39.594314440Z" level=info msg="StopPodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns successfully" Feb 13 15:55:39.595742 containerd[1488]: time="2025-02-13T15:55:39.594693988Z" level=info msg="RemovePodSandbox for \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\"" Feb 13 15:55:39.595742 containerd[1488]: time="2025-02-13T15:55:39.594724803Z" level=info msg="Forcibly stopping sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\"" Feb 13 15:55:39.595742 containerd[1488]: time="2025-02-13T15:55:39.594829675Z" level=info msg="TearDown network for sandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" successfully" Feb 13 15:55:39.598930 containerd[1488]: time="2025-02-13T15:55:39.598871207Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.598930 containerd[1488]: time="2025-02-13T15:55:39.598929444Z" level=info msg="RemovePodSandbox \"2568eb005e0d2fe05e2c8d64e06a92078151be9ba0446dade458c0c82f46b5d5\" returns successfully" Feb 13 15:55:39.599421 containerd[1488]: time="2025-02-13T15:55:39.599332744Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\"" Feb 13 15:55:39.599511 containerd[1488]: time="2025-02-13T15:55:39.599468122Z" level=info msg="TearDown network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" successfully" Feb 13 15:55:39.599511 containerd[1488]: time="2025-02-13T15:55:39.599486994Z" level=info msg="StopPodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" returns successfully" Feb 13 15:55:39.600029 containerd[1488]: time="2025-02-13T15:55:39.599915870Z" level=info msg="RemovePodSandbox for \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\"" Feb 13 15:55:39.600029 containerd[1488]: time="2025-02-13T15:55:39.599952010Z" level=info msg="Forcibly stopping sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\"" Feb 13 15:55:39.600280 containerd[1488]: time="2025-02-13T15:55:39.600086751Z" level=info msg="TearDown network for sandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" successfully" Feb 13 15:55:39.603746 containerd[1488]: time="2025-02-13T15:55:39.603695248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.603955 containerd[1488]: time="2025-02-13T15:55:39.603752433Z" level=info msg="RemovePodSandbox \"0824a1031b8245f09d61a9ef42ec743f375719f1142fee2c4d5a6041726951aa\" returns successfully" Feb 13 15:55:39.604118 containerd[1488]: time="2025-02-13T15:55:39.604089458Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\"" Feb 13 15:55:39.604225 containerd[1488]: time="2025-02-13T15:55:39.604205874Z" level=info msg="TearDown network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" successfully" Feb 13 15:55:39.604345 containerd[1488]: time="2025-02-13T15:55:39.604223893Z" level=info msg="StopPodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" returns successfully" Feb 13 15:55:39.604731 containerd[1488]: time="2025-02-13T15:55:39.604700515Z" level=info msg="RemovePodSandbox for \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\"" Feb 13 15:55:39.604842 containerd[1488]: time="2025-02-13T15:55:39.604735251Z" level=info msg="Forcibly stopping sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\"" Feb 13 15:55:39.604842 containerd[1488]: time="2025-02-13T15:55:39.604830099Z" level=info msg="TearDown network for sandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" successfully" Feb 13 15:55:39.607979 containerd[1488]: time="2025-02-13T15:55:39.607924889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.608123 containerd[1488]: time="2025-02-13T15:55:39.607980015Z" level=info msg="RemovePodSandbox \"6d14f84380b9cf96e5e23e328acd784d6e5aabe44ef329e0b56af1454dd56f8b\" returns successfully" Feb 13 15:55:39.608529 containerd[1488]: time="2025-02-13T15:55:39.608320559Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\"" Feb 13 15:55:39.608529 containerd[1488]: time="2025-02-13T15:55:39.608407630Z" level=info msg="TearDown network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" successfully" Feb 13 15:55:39.608529 containerd[1488]: time="2025-02-13T15:55:39.608418918Z" level=info msg="StopPodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" returns successfully" Feb 13 15:55:39.608967 containerd[1488]: time="2025-02-13T15:55:39.608840985Z" level=info msg="RemovePodSandbox for \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\"" Feb 13 15:55:39.608967 containerd[1488]: time="2025-02-13T15:55:39.608874820Z" level=info msg="Forcibly stopping sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\"" Feb 13 15:55:39.609106 containerd[1488]: time="2025-02-13T15:55:39.608976016Z" level=info msg="TearDown network for sandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" successfully" Feb 13 15:55:39.612342 containerd[1488]: time="2025-02-13T15:55:39.612291026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.612510 containerd[1488]: time="2025-02-13T15:55:39.612349347Z" level=info msg="RemovePodSandbox \"4b792c0863d9953646a39033365133953de68154fbd91241442aa0b4a2326b61\" returns successfully" Feb 13 15:55:39.613116 containerd[1488]: time="2025-02-13T15:55:39.612914504Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\"" Feb 13 15:55:39.613116 containerd[1488]: time="2025-02-13T15:55:39.613032677Z" level=info msg="TearDown network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" successfully" Feb 13 15:55:39.613116 containerd[1488]: time="2025-02-13T15:55:39.613050751Z" level=info msg="StopPodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" returns successfully" Feb 13 15:55:39.615266 containerd[1488]: time="2025-02-13T15:55:39.613656663Z" level=info msg="RemovePodSandbox for \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\"" Feb 13 15:55:39.615266 containerd[1488]: time="2025-02-13T15:55:39.613697422Z" level=info msg="Forcibly stopping sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\"" Feb 13 15:55:39.615266 containerd[1488]: time="2025-02-13T15:55:39.613790865Z" level=info msg="TearDown network for sandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" successfully" Feb 13 15:55:39.617625 containerd[1488]: time="2025-02-13T15:55:39.617536258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.617625 containerd[1488]: time="2025-02-13T15:55:39.617617413Z" level=info msg="RemovePodSandbox \"2f645975176f08ada137ad0a191d3ce5ace7ece38c5c525e29249bec5fb79199\" returns successfully" Feb 13 15:55:39.618007 containerd[1488]: time="2025-02-13T15:55:39.617978475Z" level=info msg="StopPodSandbox for \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\"" Feb 13 15:55:39.618122 containerd[1488]: time="2025-02-13T15:55:39.618099030Z" level=info msg="TearDown network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" successfully" Feb 13 15:55:39.618190 containerd[1488]: time="2025-02-13T15:55:39.618121885Z" level=info msg="StopPodSandbox for \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" returns successfully" Feb 13 15:55:39.618534 containerd[1488]: time="2025-02-13T15:55:39.618491808Z" level=info msg="RemovePodSandbox for \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\"" Feb 13 15:55:39.618534 containerd[1488]: time="2025-02-13T15:55:39.618524974Z" level=info msg="Forcibly stopping sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\"" Feb 13 15:55:39.618716 containerd[1488]: time="2025-02-13T15:55:39.618651531Z" level=info msg="TearDown network for sandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" successfully" Feb 13 15:55:39.622153 containerd[1488]: time="2025-02-13T15:55:39.622099031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.622246 containerd[1488]: time="2025-02-13T15:55:39.622159206Z" level=info msg="RemovePodSandbox \"e76e835081b0962b47bd0fea6d7aa2007b67b8660cda14123eb788af035a359f\" returns successfully" Feb 13 15:55:39.622918 containerd[1488]: time="2025-02-13T15:55:39.622683814Z" level=info msg="StopPodSandbox for \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\"" Feb 13 15:55:39.622918 containerd[1488]: time="2025-02-13T15:55:39.622799253Z" level=info msg="TearDown network for sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\" successfully" Feb 13 15:55:39.622918 containerd[1488]: time="2025-02-13T15:55:39.622811955Z" level=info msg="StopPodSandbox for \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\" returns successfully" Feb 13 15:55:39.623211 containerd[1488]: time="2025-02-13T15:55:39.623184025Z" level=info msg="RemovePodSandbox for \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\"" Feb 13 15:55:39.623305 containerd[1488]: time="2025-02-13T15:55:39.623214710Z" level=info msg="Forcibly stopping sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\"" Feb 13 15:55:39.623392 containerd[1488]: time="2025-02-13T15:55:39.623309194Z" level=info msg="TearDown network for sandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\" successfully" Feb 13 15:55:39.626818 containerd[1488]: time="2025-02-13T15:55:39.626772535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.627166 containerd[1488]: time="2025-02-13T15:55:39.626826301Z" level=info msg="RemovePodSandbox \"d1dec3f313eb6dcf1f53e276ece9dba54fb80226041180e0f9986f62af6df2d8\" returns successfully" Feb 13 15:55:39.627315 containerd[1488]: time="2025-02-13T15:55:39.627284441Z" level=info msg="StopPodSandbox for \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\"" Feb 13 15:55:39.627575 containerd[1488]: time="2025-02-13T15:55:39.627456432Z" level=info msg="TearDown network for sandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\" successfully" Feb 13 15:55:39.627575 containerd[1488]: time="2025-02-13T15:55:39.627481367Z" level=info msg="StopPodSandbox for \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\" returns successfully" Feb 13 15:55:39.627857 containerd[1488]: time="2025-02-13T15:55:39.627826055Z" level=info msg="RemovePodSandbox for \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\"" Feb 13 15:55:39.627943 containerd[1488]: time="2025-02-13T15:55:39.627864162Z" level=info msg="Forcibly stopping sandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\"" Feb 13 15:55:39.628008 containerd[1488]: time="2025-02-13T15:55:39.627962591Z" level=info msg="TearDown network for sandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\" successfully" Feb 13 15:55:39.631224 containerd[1488]: time="2025-02-13T15:55:39.631167278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.631360 containerd[1488]: time="2025-02-13T15:55:39.631226203Z" level=info msg="RemovePodSandbox \"f28e0eab02a44b1ec46edcf434f1fde019990f3ae5058e02f6aed7d1cceb6f64\" returns successfully" Feb 13 15:55:39.631683 containerd[1488]: time="2025-02-13T15:55:39.631641169Z" level=info msg="StopPodSandbox for \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\"" Feb 13 15:55:39.631786 containerd[1488]: time="2025-02-13T15:55:39.631736968Z" level=info msg="TearDown network for sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" successfully" Feb 13 15:55:39.631786 containerd[1488]: time="2025-02-13T15:55:39.631756113Z" level=info msg="StopPodSandbox for \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" returns successfully" Feb 13 15:55:39.632204 containerd[1488]: time="2025-02-13T15:55:39.632126827Z" level=info msg="RemovePodSandbox for \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\"" Feb 13 15:55:39.632204 containerd[1488]: time="2025-02-13T15:55:39.632160403Z" level=info msg="Forcibly stopping sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\"" Feb 13 15:55:39.632365 containerd[1488]: time="2025-02-13T15:55:39.632243726Z" level=info msg="TearDown network for sandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" successfully" Feb 13 15:55:39.636466 containerd[1488]: time="2025-02-13T15:55:39.636413294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.636588 containerd[1488]: time="2025-02-13T15:55:39.636472091Z" level=info msg="RemovePodSandbox \"f487fc40a6fc8e52d2671dc295f4ab5a638545605c7100c38d3c664dce751a11\" returns successfully" Feb 13 15:55:39.637129 containerd[1488]: time="2025-02-13T15:55:39.637002351Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:55:39.637129 containerd[1488]: time="2025-02-13T15:55:39.637119914Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:55:39.637339 containerd[1488]: time="2025-02-13T15:55:39.637137203Z" level=info msg="StopPodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:55:39.637646 containerd[1488]: time="2025-02-13T15:55:39.637604529Z" level=info msg="RemovePodSandbox for \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:55:39.637762 containerd[1488]: time="2025-02-13T15:55:39.637717690Z" level=info msg="Forcibly stopping sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\"" Feb 13 15:55:39.637912 containerd[1488]: time="2025-02-13T15:55:39.637839696Z" level=info msg="TearDown network for sandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" successfully" Feb 13 15:55:39.641258 containerd[1488]: time="2025-02-13T15:55:39.641203107Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.641434 containerd[1488]: time="2025-02-13T15:55:39.641264064Z" level=info msg="RemovePodSandbox \"19bb89576d4112a685241c22796047edc6bf4297dc5d2b3d2e7d925a8160c626\" returns successfully" Feb 13 15:55:39.641823 containerd[1488]: time="2025-02-13T15:55:39.641782689Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:55:39.641945 containerd[1488]: time="2025-02-13T15:55:39.641913407Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:55:39.641945 containerd[1488]: time="2025-02-13T15:55:39.641938035Z" level=info msg="StopPodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:55:39.642370 containerd[1488]: time="2025-02-13T15:55:39.642334692Z" level=info msg="RemovePodSandbox for \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:55:39.642453 containerd[1488]: time="2025-02-13T15:55:39.642374039Z" level=info msg="Forcibly stopping sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\"" Feb 13 15:55:39.642536 containerd[1488]: time="2025-02-13T15:55:39.642486001Z" level=info msg="TearDown network for sandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" successfully" Feb 13 15:55:39.645969 containerd[1488]: time="2025-02-13T15:55:39.645914605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.646060 containerd[1488]: time="2025-02-13T15:55:39.645980745Z" level=info msg="RemovePodSandbox \"a287c54373381bd0c8e92a908726240e8e7881de5651f002a5a4202c2614d451\" returns successfully" Feb 13 15:55:39.646469 containerd[1488]: time="2025-02-13T15:55:39.646416901Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:55:39.646649 containerd[1488]: time="2025-02-13T15:55:39.646539983Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:55:39.646649 containerd[1488]: time="2025-02-13T15:55:39.646579167Z" level=info msg="StopPodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:55:39.647000 containerd[1488]: time="2025-02-13T15:55:39.646946007Z" level=info msg="RemovePodSandbox for \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:55:39.647000 containerd[1488]: time="2025-02-13T15:55:39.646977057Z" level=info msg="Forcibly stopping sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\"" Feb 13 15:55:39.647140 containerd[1488]: time="2025-02-13T15:55:39.647074019Z" level=info msg="TearDown network for sandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" successfully" Feb 13 15:55:39.650483 containerd[1488]: time="2025-02-13T15:55:39.650428492Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.650644 containerd[1488]: time="2025-02-13T15:55:39.650485773Z" level=info msg="RemovePodSandbox \"76a9eb21edb7955f5ac9b3c9a852017cf580bc04d14f8a74c9bc52e3e46cced2\" returns successfully" Feb 13 15:55:39.651007 containerd[1488]: time="2025-02-13T15:55:39.650875615Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:55:39.651007 containerd[1488]: time="2025-02-13T15:55:39.650996253Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully" Feb 13 15:55:39.651208 containerd[1488]: time="2025-02-13T15:55:39.651014735Z" level=info msg="StopPodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully" Feb 13 15:55:39.651471 containerd[1488]: time="2025-02-13T15:55:39.651434135Z" level=info msg="RemovePodSandbox for \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:55:39.651471 containerd[1488]: time="2025-02-13T15:55:39.651467315Z" level=info msg="Forcibly stopping sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\"" Feb 13 15:55:39.651651 containerd[1488]: time="2025-02-13T15:55:39.651576346Z" level=info msg="TearDown network for sandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" successfully" Feb 13 15:55:39.654837 containerd[1488]: time="2025-02-13T15:55:39.654778782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.655092 containerd[1488]: time="2025-02-13T15:55:39.654840238Z" level=info msg="RemovePodSandbox \"6055a85d8108a3f3fb6183e5a920db5ffaf31d9980d58affb7ff1652f97a52a2\" returns successfully" Feb 13 15:55:39.655383 containerd[1488]: time="2025-02-13T15:55:39.655303419Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" Feb 13 15:55:39.655522 containerd[1488]: time="2025-02-13T15:55:39.655436529Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully" Feb 13 15:55:39.655522 containerd[1488]: time="2025-02-13T15:55:39.655455786Z" level=info msg="StopPodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully" Feb 13 15:55:39.655852 containerd[1488]: time="2025-02-13T15:55:39.655808678Z" level=info msg="RemovePodSandbox for \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" Feb 13 15:55:39.655852 containerd[1488]: time="2025-02-13T15:55:39.655840495Z" level=info msg="Forcibly stopping sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\"" Feb 13 15:55:39.656120 containerd[1488]: time="2025-02-13T15:55:39.655948386Z" level=info msg="TearDown network for sandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" successfully" Feb 13 15:55:39.659457 containerd[1488]: time="2025-02-13T15:55:39.659412068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.659647 containerd[1488]: time="2025-02-13T15:55:39.659472046Z" level=info msg="RemovePodSandbox \"889a9a4a3d0b9f13fd7c63f869780b9f47dd8a226d3443c61caa5e460894eaaa\" returns successfully" Feb 13 15:55:39.659953 containerd[1488]: time="2025-02-13T15:55:39.659841697Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\"" Feb 13 15:55:39.660101 containerd[1488]: time="2025-02-13T15:55:39.659977130Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully" Feb 13 15:55:39.660101 containerd[1488]: time="2025-02-13T15:55:39.659997921Z" level=info msg="StopPodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully" Feb 13 15:55:39.660439 containerd[1488]: time="2025-02-13T15:55:39.660327821Z" level=info msg="RemovePodSandbox for \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\"" Feb 13 15:55:39.660439 containerd[1488]: time="2025-02-13T15:55:39.660372898Z" level=info msg="Forcibly stopping sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\"" Feb 13 15:55:39.660672 containerd[1488]: time="2025-02-13T15:55:39.660507356Z" level=info msg="TearDown network for sandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" successfully" Feb 13 15:55:39.663747 containerd[1488]: time="2025-02-13T15:55:39.663688857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.663850 containerd[1488]: time="2025-02-13T15:55:39.663751162Z" level=info msg="RemovePodSandbox \"cf95d2b67bcb689c690366bbf4557fa0ee04771ec0de6f180abd835928cf9b3d\" returns successfully" Feb 13 15:55:39.664249 containerd[1488]: time="2025-02-13T15:55:39.664091988Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\"" Feb 13 15:55:39.664249 containerd[1488]: time="2025-02-13T15:55:39.664175849Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully" Feb 13 15:55:39.664249 containerd[1488]: time="2025-02-13T15:55:39.664186188Z" level=info msg="StopPodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully" Feb 13 15:55:39.665889 containerd[1488]: time="2025-02-13T15:55:39.664753304Z" level=info msg="RemovePodSandbox for \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\"" Feb 13 15:55:39.665889 containerd[1488]: time="2025-02-13T15:55:39.664789843Z" level=info msg="Forcibly stopping sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\"" Feb 13 15:55:39.665889 containerd[1488]: time="2025-02-13T15:55:39.664872865Z" level=info msg="TearDown network for sandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" successfully" Feb 13 15:55:39.668217 containerd[1488]: time="2025-02-13T15:55:39.668160214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.668304 containerd[1488]: time="2025-02-13T15:55:39.668244332Z" level=info msg="RemovePodSandbox \"d849700f715172bcecbca890386ec2329d80b4373202e80abafc7e0caebba66e\" returns successfully" Feb 13 15:55:39.668658 containerd[1488]: time="2025-02-13T15:55:39.668630234Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\"" Feb 13 15:55:39.668869 containerd[1488]: time="2025-02-13T15:55:39.668826103Z" level=info msg="TearDown network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" successfully" Feb 13 15:55:39.668869 containerd[1488]: time="2025-02-13T15:55:39.668851933Z" level=info msg="StopPodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" returns successfully" Feb 13 15:55:39.669378 containerd[1488]: time="2025-02-13T15:55:39.669348571Z" level=info msg="RemovePodSandbox for \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\"" Feb 13 15:55:39.669462 containerd[1488]: time="2025-02-13T15:55:39.669381383Z" level=info msg="Forcibly stopping sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\"" Feb 13 15:55:39.669526 containerd[1488]: time="2025-02-13T15:55:39.669476150Z" level=info msg="TearDown network for sandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" successfully" Feb 13 15:55:39.673133 containerd[1488]: time="2025-02-13T15:55:39.673096143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:55:39.673254 containerd[1488]: time="2025-02-13T15:55:39.673161201Z" level=info msg="RemovePodSandbox \"c937e05a5a2b78e52bbdbc7a85284e6e9b1990e96618a3c269421e3e23146652\" returns successfully"
Feb 13 15:55:39.673667 containerd[1488]: time="2025-02-13T15:55:39.673604339Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\""
Feb 13 15:55:39.673751 containerd[1488]: time="2025-02-13T15:55:39.673734743Z" level=info msg="TearDown network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" successfully"
Feb 13 15:55:39.673805 containerd[1488]: time="2025-02-13T15:55:39.673756049Z" level=info msg="StopPodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" returns successfully"
Feb 13 15:55:39.674324 containerd[1488]: time="2025-02-13T15:55:39.674203004Z" level=info msg="RemovePodSandbox for \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\""
Feb 13 15:55:39.674324 containerd[1488]: time="2025-02-13T15:55:39.674238787Z" level=info msg="Forcibly stopping sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\""
Feb 13 15:55:39.674497 containerd[1488]: time="2025-02-13T15:55:39.674352336Z" level=info msg="TearDown network for sandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" successfully"
Feb 13 15:55:39.677767 containerd[1488]: time="2025-02-13T15:55:39.677728017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:55:39.677873 containerd[1488]: time="2025-02-13T15:55:39.677794374Z" level=info msg="RemovePodSandbox \"7adc3428ca839418a18321b603fed4df4ffd2641b8529bb5384756a2497bb3ff\" returns successfully"
Feb 13 15:55:39.678302 containerd[1488]: time="2025-02-13T15:55:39.678267743Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\""
Feb 13 15:55:39.678449 containerd[1488]: time="2025-02-13T15:55:39.678396056Z" level=info msg="TearDown network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" successfully"
Feb 13 15:55:39.678449 containerd[1488]: time="2025-02-13T15:55:39.678418408Z" level=info msg="StopPodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" returns successfully"
Feb 13 15:55:39.678847 containerd[1488]: time="2025-02-13T15:55:39.678801826Z" level=info msg="RemovePodSandbox for \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\""
Feb 13 15:55:39.678847 containerd[1488]: time="2025-02-13T15:55:39.678831915Z" level=info msg="Forcibly stopping sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\""
Feb 13 15:55:39.679002 containerd[1488]: time="2025-02-13T15:55:39.678926610Z" level=info msg="TearDown network for sandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" successfully"
Feb 13 15:55:39.682941 containerd[1488]: time="2025-02-13T15:55:39.682784024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:55:39.682941 containerd[1488]: time="2025-02-13T15:55:39.682846347Z" level=info msg="RemovePodSandbox \"49bfccbc6aa05d4d5edd33c21625820743957c32deb0557ce57b072affbf2077\" returns successfully"
Feb 13 15:55:39.683921 containerd[1488]: time="2025-02-13T15:55:39.683431843Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\""
Feb 13 15:55:39.683921 containerd[1488]: time="2025-02-13T15:55:39.683586765Z" level=info msg="TearDown network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" successfully"
Feb 13 15:55:39.683921 containerd[1488]: time="2025-02-13T15:55:39.683608103Z" level=info msg="StopPodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" returns successfully"
Feb 13 15:55:39.684277 containerd[1488]: time="2025-02-13T15:55:39.684250282Z" level=info msg="RemovePodSandbox for \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\""
Feb 13 15:55:39.684412 containerd[1488]: time="2025-02-13T15:55:39.684376911Z" level=info msg="Forcibly stopping sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\""
Feb 13 15:55:39.684540 containerd[1488]: time="2025-02-13T15:55:39.684485741Z" level=info msg="TearDown network for sandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" successfully"
Feb 13 15:55:39.687962 containerd[1488]: time="2025-02-13T15:55:39.687908744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:55:39.688077 containerd[1488]: time="2025-02-13T15:55:39.687974300Z" level=info msg="RemovePodSandbox \"23d010e7999608c46ee794f1ba11e3ad41c1fb94c2f3473762ff8d76c3d9be86\" returns successfully"
Feb 13 15:55:39.688366 containerd[1488]: time="2025-02-13T15:55:39.688334086Z" level=info msg="StopPodSandbox for \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\""
Feb 13 15:55:39.688519 containerd[1488]: time="2025-02-13T15:55:39.688456333Z" level=info msg="TearDown network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" successfully"
Feb 13 15:55:39.688519 containerd[1488]: time="2025-02-13T15:55:39.688476479Z" level=info msg="StopPodSandbox for \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" returns successfully"
Feb 13 15:55:39.690725 containerd[1488]: time="2025-02-13T15:55:39.688955982Z" level=info msg="RemovePodSandbox for \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\""
Feb 13 15:55:39.690725 containerd[1488]: time="2025-02-13T15:55:39.688994438Z" level=info msg="Forcibly stopping sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\""
Feb 13 15:55:39.690725 containerd[1488]: time="2025-02-13T15:55:39.689089592Z" level=info msg="TearDown network for sandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" successfully"
Feb 13 15:55:39.694982 containerd[1488]: time="2025-02-13T15:55:39.693398276Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:55:39.694982 containerd[1488]: time="2025-02-13T15:55:39.693506161Z" level=info msg="RemovePodSandbox \"0ff44aeba526ecd14d13561dad85484e177da940f183726e9b24975fdcdb3ead\" returns successfully"
Feb 13 15:55:39.695523 containerd[1488]: time="2025-02-13T15:55:39.695487363Z" level=info msg="StopPodSandbox for \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\""
Feb 13 15:55:39.695847 containerd[1488]: time="2025-02-13T15:55:39.695823671Z" level=info msg="TearDown network for sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\" successfully"
Feb 13 15:55:39.696015 containerd[1488]: time="2025-02-13T15:55:39.695983746Z" level=info msg="StopPodSandbox for \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\" returns successfully"
Feb 13 15:55:39.698837 containerd[1488]: time="2025-02-13T15:55:39.698806191Z" level=info msg="RemovePodSandbox for \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\""
Feb 13 15:55:39.699039 containerd[1488]: time="2025-02-13T15:55:39.699015263Z" level=info msg="Forcibly stopping sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\""
Feb 13 15:55:39.699404 containerd[1488]: time="2025-02-13T15:55:39.699347564Z" level=info msg="TearDown network for sandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\" successfully"
Feb 13 15:55:39.705204 containerd[1488]: time="2025-02-13T15:55:39.705157842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:55:39.705353 containerd[1488]: time="2025-02-13T15:55:39.705238488Z" level=info msg="RemovePodSandbox \"a76974639a3a547f342cdf3424f675a7c8f9e70716ed83f78cc4b2a7a8d73f50\" returns successfully"
Feb 13 15:55:39.706265 containerd[1488]: time="2025-02-13T15:55:39.706024437Z" level=info msg="StopPodSandbox for \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\""
Feb 13 15:55:39.706265 containerd[1488]: time="2025-02-13T15:55:39.706145567Z" level=info msg="TearDown network for sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\" successfully"
Feb 13 15:55:39.706265 containerd[1488]: time="2025-02-13T15:55:39.706161430Z" level=info msg="StopPodSandbox for \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\" returns successfully"
Feb 13 15:55:39.706694 containerd[1488]: time="2025-02-13T15:55:39.706645506Z" level=info msg="RemovePodSandbox for \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\""
Feb 13 15:55:39.706694 containerd[1488]: time="2025-02-13T15:55:39.706678944Z" level=info msg="Forcibly stopping sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\""
Feb 13 15:55:39.706853 containerd[1488]: time="2025-02-13T15:55:39.706773621Z" level=info msg="TearDown network for sandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\" successfully"
Feb 13 15:55:39.711728 containerd[1488]: time="2025-02-13T15:55:39.711481255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:55:39.711728 containerd[1488]: time="2025-02-13T15:55:39.711541368Z" level=info msg="RemovePodSandbox \"fa33055521b468ca518b10e1ad42d6140f34547d78c1b91af770a26805de24a8\" returns successfully"
Feb 13 15:55:40.002944 systemd-networkd[1398]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 15:55:40.593871 kubelet[1860]: E0213 15:55:40.593803 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:41.594739 kubelet[1860]: E0213 15:55:41.594672 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:42.595378 kubelet[1860]: E0213 15:55:42.595308 1860 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:55:42.829109 ntpd[1458]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123
Feb 13 15:55:42.829600 ntpd[1458]: 13 Feb 15:55:42 ntpd[1458]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123