Feb 13 19:49:20.095999 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025 Feb 13 19:49:20.096050 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:49:20.096067 kernel: BIOS-provided physical RAM map: Feb 13 19:49:20.107640 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 19:49:20.107665 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 19:49:20.107676 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 19:49:20.107696 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Feb 13 19:49:20.107707 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Feb 13 19:49:20.107718 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Feb 13 19:49:20.107729 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 19:49:20.107740 kernel: NX (Execute Disable) protection: active Feb 13 19:49:20.107750 kernel: APIC: Static calls initialized Feb 13 19:49:20.107760 kernel: SMBIOS 2.7 present. 
Feb 13 19:49:20.107772 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Feb 13 19:49:20.107788 kernel: Hypervisor detected: KVM Feb 13 19:49:20.107801 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 19:49:20.107813 kernel: kvm-clock: using sched offset of 8475396479 cycles Feb 13 19:49:20.107826 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 19:49:20.107839 kernel: tsc: Detected 2499.998 MHz processor Feb 13 19:49:20.107851 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 19:49:20.107863 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 19:49:20.107879 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Feb 13 19:49:20.107891 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 19:49:20.107903 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 19:49:20.107915 kernel: Using GB pages for direct mapping Feb 13 19:49:20.107927 kernel: ACPI: Early table checksum verification disabled Feb 13 19:49:20.107938 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Feb 13 19:49:20.107951 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Feb 13 19:49:20.107962 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 19:49:20.107974 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Feb 13 19:49:20.107989 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Feb 13 19:49:20.108002 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 13 19:49:20.108013 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 19:49:20.108025 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Feb 13 19:49:20.108037 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON 
AMZNSLIT 00000001 AMZN 00000001) Feb 13 19:49:20.108049 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Feb 13 19:49:20.108061 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Feb 13 19:49:20.108073 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 13 19:49:20.108099 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Feb 13 19:49:20.108115 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Feb 13 19:49:20.108133 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Feb 13 19:49:20.108146 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Feb 13 19:49:20.108158 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Feb 13 19:49:20.108171 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Feb 13 19:49:20.108187 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Feb 13 19:49:20.108201 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Feb 13 19:49:20.108213 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Feb 13 19:49:20.108236 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Feb 13 19:49:20.108249 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 19:49:20.109119 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 19:49:20.109155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Feb 13 19:49:20.109172 kernel: NUMA: Initialized distance table, cnt=1 Feb 13 19:49:20.109186 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Feb 13 19:49:20.109207 kernel: Zone ranges: Feb 13 19:49:20.109222 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:49:20.109236 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Feb 13 19:49:20.109251 kernel: Normal empty Feb 13 19:49:20.109266 kernel: Movable zone start for each 
node Feb 13 19:49:20.109280 kernel: Early memory node ranges Feb 13 19:49:20.109295 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 19:49:20.109310 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Feb 13 19:49:20.109325 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Feb 13 19:49:20.109340 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:49:20.109358 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 19:49:20.109373 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Feb 13 19:49:20.109387 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 19:49:20.109402 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 19:49:20.109416 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Feb 13 19:49:20.109431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 19:49:20.109445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:49:20.109460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 19:49:20.109475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 19:49:20.109493 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:49:20.109507 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 19:49:20.109522 kernel: TSC deadline timer available Feb 13 19:49:20.109536 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 19:49:20.109551 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 19:49:20.109566 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Feb 13 19:49:20.109581 kernel: Booting paravirtualized kernel on KVM Feb 13 19:49:20.109595 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:49:20.109613 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 19:49:20.109628 kernel: percpu: 
Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 19:49:20.109643 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 19:49:20.109657 kernel: pcpu-alloc: [0] 0 1 Feb 13 19:49:20.109671 kernel: kvm-guest: PV spinlocks enabled Feb 13 19:49:20.109685 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 19:49:20.109702 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:49:20.109717 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:49:20.109735 kernel: random: crng init done Feb 13 19:49:20.109749 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:49:20.109764 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 19:49:20.109779 kernel: Fallback order for Node 0: 0 Feb 13 19:49:20.109793 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 506242 Feb 13 19:49:20.109807 kernel: Policy zone: DMA32 Feb 13 19:49:20.109822 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:49:20.109837 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved) Feb 13 19:49:20.109852 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 19:49:20.109870 kernel: Kernel/User page tables isolation: enabled Feb 13 19:49:20.109885 kernel: ftrace: allocating 37923 entries in 149 pages Feb 13 19:49:20.109899 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:49:20.109914 kernel: Dynamic Preempt: voluntary Feb 13 19:49:20.109929 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:49:20.109945 kernel: rcu: RCU event tracing is enabled. Feb 13 19:49:20.109960 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 19:49:20.109975 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:49:20.109990 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:49:20.110004 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:49:20.110022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:49:20.110037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 19:49:20.110051 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 19:49:20.110066 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 19:49:20.114131 kernel: Console: colour VGA+ 80x25 Feb 13 19:49:20.114160 kernel: printk: console [ttyS0] enabled Feb 13 19:49:20.114176 kernel: ACPI: Core revision 20230628 Feb 13 19:49:20.114191 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Feb 13 19:49:20.114206 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:49:20.114228 kernel: x2apic enabled Feb 13 19:49:20.114243 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 19:49:20.114270 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Feb 13 19:49:20.114289 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Feb 13 19:49:20.114304 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 19:49:20.114320 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 19:49:20.114335 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:49:20.114350 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 19:49:20.114366 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:49:20.114381 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 19:49:20.114397 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 13 19:49:20.114412 kernel: RETBleed: Vulnerable Feb 13 19:49:20.114431 kernel: Speculative Store Bypass: Vulnerable Feb 13 19:49:20.114447 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 19:49:20.114462 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 19:49:20.114475 kernel: GDS: Unknown: Dependent on hypervisor status Feb 13 19:49:20.114489 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 19:49:20.114504 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:49:20.114522 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:49:20.114537 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 19:49:20.114553 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 19:49:20.114568 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 19:49:20.114583 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 19:49:20.114598 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 19:49:20.114614 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Feb 13 19:49:20.114629 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:49:20.114644 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 19:49:20.114659 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 19:49:20.114674 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Feb 13 19:49:20.114692 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Feb 13 19:49:20.114707 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Feb 13 19:49:20.114722 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Feb 13 19:49:20.114737 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Feb 13 19:49:20.114752 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:49:20.114767 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:49:20.114782 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:49:20.114798 kernel: landlock: Up and running. Feb 13 19:49:20.114813 kernel: SELinux: Initializing. Feb 13 19:49:20.114828 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 19:49:20.114844 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 19:49:20.114859 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 19:49:20.114877 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:49:20.114893 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:49:20.114908 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:49:20.114924 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 19:49:20.114939 kernel: signal: max sigframe size: 3632 Feb 13 19:49:20.114955 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:49:20.114972 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:49:20.114987 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 19:49:20.115003 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:49:20.115021 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:49:20.115036 kernel: .... node #0, CPUs: #1 Feb 13 19:49:20.115053 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 19:49:20.115069 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 19:49:20.115119 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 19:49:20.115135 kernel: smpboot: Max logical packages: 1 Feb 13 19:49:20.115150 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Feb 13 19:49:20.115165 kernel: devtmpfs: initialized Feb 13 19:49:20.115184 kernel: x86/mm: Memory block size: 128MB Feb 13 19:49:20.115200 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:49:20.115215 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 19:49:20.115231 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:49:20.115246 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:49:20.115260 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:49:20.115276 kernel: audit: type=2000 audit(1739476159.158:1): state=initialized audit_enabled=0 res=1 Feb 13 19:49:20.115291 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:49:20.115306 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:49:20.115326 kernel: cpuidle: using governor menu Feb 13 19:49:20.115341 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:49:20.115355 kernel: dca service started, version 1.12.1 Feb 13 19:49:20.115368 kernel: PCI: Using configuration type 1 for base access Feb 13 19:49:20.115384 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 19:49:20.115399 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:49:20.115413 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:49:20.115426 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:49:20.115440 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:49:20.115456 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:49:20.115470 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:49:20.115484 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:49:20.115499 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:49:20.115514 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 19:49:20.115528 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:49:20.115543 kernel: ACPI: Interpreter enabled Feb 13 19:49:20.115557 kernel: ACPI: PM: (supports S0 S5) Feb 13 19:49:20.115572 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:49:20.115591 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:49:20.115606 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 19:49:20.115621 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 19:49:20.115637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:49:20.115890 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:49:20.116038 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 19:49:20.118686 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 19:49:20.118734 kernel: acpiphp: Slot [3] registered Feb 13 19:49:20.118749 kernel: acpiphp: Slot [4] registered Feb 13 19:49:20.118763 kernel: acpiphp: Slot [5] registered Feb 13 19:49:20.118776 kernel: acpiphp: Slot [6] registered Feb 13 19:49:20.118790 
kernel: acpiphp: Slot [7] registered Feb 13 19:49:20.118803 kernel: acpiphp: Slot [8] registered Feb 13 19:49:20.118818 kernel: acpiphp: Slot [9] registered Feb 13 19:49:20.118831 kernel: acpiphp: Slot [10] registered Feb 13 19:49:20.118845 kernel: acpiphp: Slot [11] registered Feb 13 19:49:20.118858 kernel: acpiphp: Slot [12] registered Feb 13 19:49:20.118875 kernel: acpiphp: Slot [13] registered Feb 13 19:49:20.118888 kernel: acpiphp: Slot [14] registered Feb 13 19:49:20.118902 kernel: acpiphp: Slot [15] registered Feb 13 19:49:20.118916 kernel: acpiphp: Slot [16] registered Feb 13 19:49:20.118929 kernel: acpiphp: Slot [17] registered Feb 13 19:49:20.118943 kernel: acpiphp: Slot [18] registered Feb 13 19:49:20.118959 kernel: acpiphp: Slot [19] registered Feb 13 19:49:20.118974 kernel: acpiphp: Slot [20] registered Feb 13 19:49:20.118990 kernel: acpiphp: Slot [21] registered Feb 13 19:49:20.119009 kernel: acpiphp: Slot [22] registered Feb 13 19:49:20.119025 kernel: acpiphp: Slot [23] registered Feb 13 19:49:20.119040 kernel: acpiphp: Slot [24] registered Feb 13 19:49:20.119056 kernel: acpiphp: Slot [25] registered Feb 13 19:49:20.119072 kernel: acpiphp: Slot [26] registered Feb 13 19:49:20.119329 kernel: acpiphp: Slot [27] registered Feb 13 19:49:20.119348 kernel: acpiphp: Slot [28] registered Feb 13 19:49:20.119364 kernel: acpiphp: Slot [29] registered Feb 13 19:49:20.119381 kernel: acpiphp: Slot [30] registered Feb 13 19:49:20.119397 kernel: acpiphp: Slot [31] registered Feb 13 19:49:20.119418 kernel: PCI host bridge to bus 0000:00 Feb 13 19:49:20.119587 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 19:49:20.119711 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 19:49:20.119829 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:49:20.119945 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 13 19:49:20.120060 kernel: pci_bus 0000:00: root 
bus resource [bus 00-ff] Feb 13 19:49:20.123075 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 19:49:20.123297 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 13 19:49:20.123454 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Feb 13 19:49:20.124356 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 19:49:20.124523 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 13 19:49:20.124672 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Feb 13 19:49:20.128143 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Feb 13 19:49:20.128361 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Feb 13 19:49:20.128500 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Feb 13 19:49:20.128633 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Feb 13 19:49:20.128768 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Feb 13 19:49:20.129001 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Feb 13 19:49:20.129253 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Feb 13 19:49:20.129404 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 13 19:49:20.129536 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 19:49:20.129679 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 19:49:20.129811 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Feb 13 19:49:20.129951 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 19:49:20.131132 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Feb 13 19:49:20.131167 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 19:49:20.131185 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 19:49:20.131209 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 19:49:20.131226 kernel: ACPI: PCI: Interrupt 
link LNKD configured for IRQ 11 Feb 13 19:49:20.131244 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 19:49:20.131261 kernel: iommu: Default domain type: Translated Feb 13 19:49:20.131278 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:49:20.131296 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:49:20.131420 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:49:20.131438 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 19:49:20.131453 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Feb 13 19:49:20.131637 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Feb 13 19:49:20.131771 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Feb 13 19:49:20.131907 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 19:49:20.131929 kernel: vgaarb: loaded Feb 13 19:49:20.131944 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 13 19:49:20.131959 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Feb 13 19:49:20.131974 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 19:49:20.131990 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:49:20.132005 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:49:20.132025 kernel: pnp: PnP ACPI init Feb 13 19:49:20.132041 kernel: pnp: PnP ACPI: found 5 devices Feb 13 19:49:20.132056 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:49:20.132072 kernel: NET: Registered PF_INET protocol family Feb 13 19:49:20.136212 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:49:20.136232 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 19:49:20.136251 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:49:20.136268 kernel: TCP established hash table entries: 16384 (order: 5, 
131072 bytes, linear) Feb 13 19:49:20.136292 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 19:49:20.136309 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 19:49:20.136326 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 19:49:20.136344 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 19:49:20.136360 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:49:20.136377 kernel: NET: Registered PF_XDP protocol family Feb 13 19:49:20.136546 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:49:20.136680 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:49:20.136809 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:49:20.137014 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 13 19:49:20.137328 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 19:49:20.137364 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:49:20.137384 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 19:49:20.137405 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Feb 13 19:49:20.137426 kernel: clocksource: Switched to clocksource tsc Feb 13 19:49:20.137445 kernel: Initialise system trusted keyrings Feb 13 19:49:20.137465 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 19:49:20.137491 kernel: Key type asymmetric registered Feb 13 19:49:20.137510 kernel: Asymmetric key parser 'x509' registered Feb 13 19:49:20.137529 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:49:20.137549 kernel: io scheduler mq-deadline registered Feb 13 19:49:20.137569 kernel: io scheduler kyber registered Feb 13 19:49:20.137588 kernel: io scheduler bfq registered Feb 13 19:49:20.137608 kernel: ioatdma: Intel(R) QuickData 
Technology Driver 5.00 Feb 13 19:49:20.137628 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:49:20.137647 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:49:20.137671 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 19:49:20.137690 kernel: i8042: Warning: Keylock active Feb 13 19:49:20.137709 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 19:49:20.137729 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 19:49:20.137915 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 13 19:49:20.138065 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 19:49:20.142389 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:49:19 UTC (1739476159) Feb 13 19:49:20.142528 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 19:49:20.142557 kernel: intel_pstate: CPU model not supported Feb 13 19:49:20.142575 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:49:20.142591 kernel: Segment Routing with IPv6 Feb 13 19:49:20.142608 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:49:20.142626 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:49:20.142643 kernel: Key type dns_resolver registered Feb 13 19:49:20.142660 kernel: IPI shorthand broadcast: enabled Feb 13 19:49:20.142677 kernel: sched_clock: Marking stable (668002058, 233461007)->(1042892151, -141429086) Feb 13 19:49:20.142695 kernel: registered taskstats version 1 Feb 13 19:49:20.142716 kernel: Loading compiled-in X.509 certificates Feb 13 19:49:20.142733 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b' Feb 13 19:49:20.142749 kernel: Key type .fscrypt registered Feb 13 19:49:20.142766 kernel: Key type fscrypt-provisioning registered Feb 13 19:49:20.142782 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 19:49:20.142798 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:49:20.142815 kernel: ima: No architecture policies found Feb 13 19:49:20.142832 kernel: clk: Disabling unused clocks Feb 13 19:49:20.142852 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 19:49:20.142868 kernel: Write protecting the kernel read-only data: 36864k Feb 13 19:49:20.142885 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 19:49:20.142978 kernel: Run /init as init process Feb 13 19:49:20.142997 kernel: with arguments: Feb 13 19:49:20.143014 kernel: /init Feb 13 19:49:20.143030 kernel: with environment: Feb 13 19:49:20.143046 kernel: HOME=/ Feb 13 19:49:20.143063 kernel: TERM=linux Feb 13 19:49:20.143091 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:49:20.144148 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:49:20.144184 systemd[1]: Detected virtualization amazon. Feb 13 19:49:20.144206 systemd[1]: Detected architecture x86-64. Feb 13 19:49:20.144224 systemd[1]: Running in initrd. Feb 13 19:49:20.144245 systemd[1]: No hostname configured, using default hostname. Feb 13 19:49:20.144262 systemd[1]: Hostname set to . Feb 13 19:49:20.144281 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:49:20.144299 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:49:20.144319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:49:20.144338 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:49:20.144356 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:49:20.144375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:49:20.144397 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:49:20.144415 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:49:20.144436 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:49:20.144454 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:49:20.144474 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:49:20.144492 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:49:20.144511 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:49:20.144532 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:49:20.144550 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:49:20.144569 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:49:20.144588 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:49:20.144606 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:49:20.144624 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:49:20.144644 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:49:20.144662 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:49:20.144681 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:49:20.144703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 19:49:20.144721 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:49:20.144739 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:49:20.144757 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:49:20.144780 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:49:20.144803 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:49:20.144821 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:49:20.144846 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:49:20.144864 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:49:20.144883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:49:20.144941 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 19:49:20.144985 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:49:20.145004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:49:20.145023 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:49:20.145043 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:49:20.145063 systemd-journald[179]: Journal started
Feb 13 19:49:20.146189 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2c842d2206859dd78b3a16c299c6c2) is 4.8M, max 38.6M, 33.7M free.
Feb 13 19:49:20.120638 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 19:49:20.269098 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:49:20.269138 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:49:20.269160 kernel: Bridge firewalling registered
Feb 13 19:49:20.172583 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 19:49:20.267447 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:49:20.269376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:49:20.271507 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:49:20.281323 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:49:20.296310 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:49:20.301073 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:49:20.305371 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:49:20.312269 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:49:20.323628 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:49:20.326030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:49:20.342569 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:49:20.356701 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:49:20.369728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:49:20.378748 dracut-cmdline[207]: dracut-dracut-053
Feb 13 19:49:20.383934 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:49:20.443891 systemd-resolved[219]: Positive Trust Anchors:
Feb 13 19:49:20.443915 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:49:20.443975 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:49:20.449106 systemd-resolved[219]: Defaulting to hostname 'linux'.
Feb 13 19:49:20.452213 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:49:20.458407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:49:20.508115 kernel: SCSI subsystem initialized
Feb 13 19:49:20.520115 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:49:20.551035 kernel: iscsi: registered transport (tcp)
Feb 13 19:49:20.590112 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:49:20.590190 kernel: QLogic iSCSI HBA Driver
Feb 13 19:49:20.637906 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:49:20.644440 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:49:20.675249 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:49:20.675330 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:49:20.675353 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:49:20.721116 kernel: raid6: avx512x4 gen() 8333 MB/s
Feb 13 19:49:20.739116 kernel: raid6: avx512x2 gen() 10652 MB/s
Feb 13 19:49:20.756105 kernel: raid6: avx512x1 gen() 15232 MB/s
Feb 13 19:49:20.775121 kernel: raid6: avx2x4 gen() 9004 MB/s
Feb 13 19:49:20.793118 kernel: raid6: avx2x2 gen() 8895 MB/s
Feb 13 19:49:20.811850 kernel: raid6: avx2x1 gen() 6307 MB/s
Feb 13 19:49:20.811918 kernel: raid6: using algorithm avx512x1 gen() 15232 MB/s
Feb 13 19:49:20.829461 kernel: raid6: .... xor() 13374 MB/s, rmw enabled
Feb 13 19:49:20.829542 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 19:49:20.851104 kernel: xor: automatically using best checksumming function avx
Feb 13 19:49:21.083105 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:49:21.099097 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:49:21.108397 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:49:21.129306 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Feb 13 19:49:21.134875 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:49:21.146323 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:49:21.197500 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 13 19:49:21.238380 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:49:21.244282 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:49:21.309753 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:49:21.321328 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:49:21.355483 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:49:21.360391 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:49:21.361662 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:49:21.363325 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:49:21.368445 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:49:21.417423 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:49:21.455456 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:49:21.455696 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:49:21.455720 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 19:49:21.455907 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:49:21.455937 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:49:21.455960 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ba:60:4b:91:eb
Feb 13 19:49:21.418650 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:49:21.436620 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:49:21.436788 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:49:21.438551 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:49:21.439879 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:49:21.440095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:49:21.441367 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:49:21.451930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:49:21.458957 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:49:21.503373 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:49:21.506447 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 19:49:21.519173 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:49:21.531114 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:49:21.531187 kernel: GPT:9289727 != 16777215
Feb 13 19:49:21.531206 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:49:21.531225 kernel: GPT:9289727 != 16777215
Feb 13 19:49:21.531241 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:49:21.531261 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:49:21.643788 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (460)
Feb 13 19:49:21.644270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:49:21.653307 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:49:21.661925 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (454)
Feb 13 19:49:21.706505 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:49:21.756598 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:49:21.769338 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:49:21.776761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:49:21.784241 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:49:21.785519 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:49:21.798271 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:49:21.811651 disk-uuid[632]: Primary Header is updated.
Feb 13 19:49:21.811651 disk-uuid[632]: Secondary Entries is updated.
Feb 13 19:49:21.811651 disk-uuid[632]: Secondary Header is updated.
Feb 13 19:49:21.820111 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:49:21.836103 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:49:22.842107 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:49:22.852379 disk-uuid[633]: The operation has completed successfully.
Feb 13 19:49:23.072488 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:49:23.072644 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:49:23.131343 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:49:23.140241 sh[891]: Success
Feb 13 19:49:23.165113 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:49:23.312233 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:49:23.323217 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:49:23.336214 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:49:23.379644 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6
Feb 13 19:49:23.379710 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:49:23.379725 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:49:23.380459 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:49:23.381487 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:49:23.454114 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:49:23.459257 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:49:23.460968 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:49:23.468364 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:49:23.474450 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:49:23.486221 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:49:23.486288 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:49:23.486308 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:49:23.493148 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:49:23.513842 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:49:23.515655 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:49:23.529826 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:49:23.538953 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:49:23.646361 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:49:23.657514 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:49:23.685298 systemd-networkd[1084]: lo: Link UP
Feb 13 19:49:23.685309 systemd-networkd[1084]: lo: Gained carrier
Feb 13 19:49:23.688374 systemd-networkd[1084]: Enumeration completed
Feb 13 19:49:23.688513 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:49:23.689949 systemd-networkd[1084]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:49:23.689955 systemd-networkd[1084]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:49:23.689993 systemd[1]: Reached target network.target - Network.
Feb 13 19:49:23.697990 systemd-networkd[1084]: eth0: Link UP
Feb 13 19:49:23.697997 systemd-networkd[1084]: eth0: Gained carrier
Feb 13 19:49:23.698014 systemd-networkd[1084]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:49:23.717312 systemd-networkd[1084]: eth0: DHCPv4 address 172.31.31.165/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:49:23.899743 ignition[1002]: Ignition 2.20.0
Feb 13 19:49:23.899754 ignition[1002]: Stage: fetch-offline
Feb 13 19:49:23.899931 ignition[1002]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:49:23.907372 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:49:23.899939 ignition[1002]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:49:23.900844 ignition[1002]: Ignition finished successfully
Feb 13 19:49:23.931381 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:49:23.995693 ignition[1094]: Ignition 2.20.0
Feb 13 19:49:23.995708 ignition[1094]: Stage: fetch
Feb 13 19:49:23.996183 ignition[1094]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:49:23.996196 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:49:23.996397 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:49:24.047359 ignition[1094]: PUT result: OK
Feb 13 19:49:24.069740 ignition[1094]: parsed url from cmdline: ""
Feb 13 19:49:24.069751 ignition[1094]: no config URL provided
Feb 13 19:49:24.069762 ignition[1094]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:49:24.069778 ignition[1094]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:49:24.069804 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:49:24.071785 ignition[1094]: PUT result: OK
Feb 13 19:49:24.074314 ignition[1094]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:49:24.077467 ignition[1094]: GET result: OK
Feb 13 19:49:24.077531 ignition[1094]: parsing config with SHA512: 0b246a9289a50643c9dcc2e1bb4aa4677e82e811a07b790b8d878f5829317ea1180aeb9978d413f0e3c2c8b97bc812bde5dcea863d1d3aa6343840b7d41a3155
Feb 13 19:49:24.081260 unknown[1094]: fetched base config from "system"
Feb 13 19:49:24.081276 unknown[1094]: fetched base config from "system"
Feb 13 19:49:24.081580 ignition[1094]: fetch: fetch complete
Feb 13 19:49:24.081285 unknown[1094]: fetched user config from "aws"
Feb 13 19:49:24.081586 ignition[1094]: fetch: fetch passed
Feb 13 19:49:24.081640 ignition[1094]: Ignition finished successfully
Feb 13 19:49:24.087588 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:49:24.107197 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:49:24.152446 ignition[1101]: Ignition 2.20.0
Feb 13 19:49:24.152460 ignition[1101]: Stage: kargs
Feb 13 19:49:24.152907 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:49:24.152921 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:49:24.153036 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:49:24.156358 ignition[1101]: PUT result: OK
Feb 13 19:49:24.161209 ignition[1101]: kargs: kargs passed
Feb 13 19:49:24.161288 ignition[1101]: Ignition finished successfully
Feb 13 19:49:24.164432 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:49:24.172343 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:49:24.188763 ignition[1107]: Ignition 2.20.0
Feb 13 19:49:24.188777 ignition[1107]: Stage: disks
Feb 13 19:49:24.189257 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:49:24.189270 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:49:24.189378 ignition[1107]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:49:24.190944 ignition[1107]: PUT result: OK
Feb 13 19:49:24.196595 ignition[1107]: disks: disks passed
Feb 13 19:49:24.196660 ignition[1107]: Ignition finished successfully
Feb 13 19:49:24.198558 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:49:24.201552 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:49:24.202799 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:49:24.204074 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:49:24.205608 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:49:24.206902 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:49:24.223639 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:49:24.279933 systemd-fsck[1115]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:49:24.284736 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:49:24.298593 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:49:24.465104 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none.
Feb 13 19:49:24.465996 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:49:24.468639 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:49:24.479468 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:49:24.490232 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:49:24.493672 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:49:24.493740 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:49:24.500510 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1134)
Feb 13 19:49:24.493769 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:49:24.504498 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:49:24.504552 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:49:24.504567 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:49:24.507994 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:49:24.514128 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:49:24.518659 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:49:24.521949 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:49:24.797407 initrd-setup-root[1158]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:49:24.833423 initrd-setup-root[1165]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:49:24.843367 initrd-setup-root[1172]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:49:24.855881 initrd-setup-root[1179]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:49:25.115275 systemd-networkd[1084]: eth0: Gained IPv6LL
Feb 13 19:49:25.236829 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:49:25.244311 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:49:25.260450 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:49:25.270784 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:49:25.272389 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:49:25.315814 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:49:25.332525 ignition[1247]: INFO : Ignition 2.20.0
Feb 13 19:49:25.332525 ignition[1247]: INFO : Stage: mount
Feb 13 19:49:25.334418 ignition[1247]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:49:25.334418 ignition[1247]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:49:25.334418 ignition[1247]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:49:25.339818 ignition[1247]: INFO : PUT result: OK
Feb 13 19:49:25.342631 ignition[1247]: INFO : mount: mount passed
Feb 13 19:49:25.343477 ignition[1247]: INFO : Ignition finished successfully
Feb 13 19:49:25.346448 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:49:25.351222 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:49:25.473382 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:49:25.507935 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1259)
Feb 13 19:49:25.510455 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:49:25.510588 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:49:25.510605 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:49:25.517109 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:49:25.521322 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:49:25.594648 ignition[1276]: INFO : Ignition 2.20.0
Feb 13 19:49:25.594648 ignition[1276]: INFO : Stage: files
Feb 13 19:49:25.596637 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:49:25.596637 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:49:25.596637 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:49:25.600460 ignition[1276]: INFO : PUT result: OK
Feb 13 19:49:25.605172 ignition[1276]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:49:25.608977 ignition[1276]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:49:25.608977 ignition[1276]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:49:25.618414 ignition[1276]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:49:25.620179 ignition[1276]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:49:25.621710 unknown[1276]: wrote ssh authorized keys file for user: core
Feb 13 19:49:25.623423 ignition[1276]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:49:25.626471 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:49:25.631248 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:49:25.631248 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:49:25.631248 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:49:25.631248 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:49:25.631248 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:49:25.631248 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:49:25.631248 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:49:26.004052 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:49:26.391956 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:49:26.395373 ignition[1276]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:49:26.395373 ignition[1276]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:49:26.395373 ignition[1276]: INFO : files: files passed
Feb 13 19:49:26.395373 ignition[1276]: INFO : Ignition finished successfully
Feb 13 19:49:26.395993 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:49:26.409490 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:49:26.414410 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:49:26.416303 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:49:26.416418 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:49:26.477795 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:49:26.477795 initrd-setup-root-after-ignition[1304]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:49:26.494109 initrd-setup-root-after-ignition[1308]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:49:26.500447 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:49:26.504498 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:49:26.511303 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:49:26.578228 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:49:26.578377 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:49:26.583589 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:49:26.585749 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:49:26.586842 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:49:26.596388 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:49:26.621590 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:49:26.631794 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:49:26.652800 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:49:26.654267 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:49:26.655471 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:49:26.659485 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:49:26.659642 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:49:26.662423 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:49:26.664499 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:49:26.677351 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:49:26.679960 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:49:26.682557 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:49:26.684895 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:49:26.687087 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:49:26.688513 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:49:26.689792 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:49:26.694492 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:49:26.699382 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:49:26.699528 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:49:26.703548 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:49:26.705517 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:49:26.708308 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:49:26.709001 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:49:26.712958 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:49:26.713145 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:49:26.720759 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:49:26.721046 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:49:26.724136 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:49:26.724287 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:49:26.748431 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:49:26.752155 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:49:26.752477 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:49:26.794435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:49:26.795816 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:49:26.796148 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:49:26.797608 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:49:26.797785 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:49:26.811575 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:49:26.813057 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:49:26.824106 ignition[1328]: INFO : Ignition 2.20.0
Feb 13 19:49:26.824106 ignition[1328]: INFO : Stage: umount
Feb 13 19:49:26.824106 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:49:26.824106 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:49:26.824106 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:49:26.833749 ignition[1328]: INFO : PUT result: OK
Feb 13 19:49:26.829806 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:49:26.836765 ignition[1328]: INFO : umount: umount passed
Feb 13 19:49:26.837676 ignition[1328]: INFO : Ignition finished successfully
Feb 13 19:49:26.838991 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:49:26.841157 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:49:26.844455 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:49:26.845230 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:49:26.848163 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:49:26.848245 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:49:26.855929 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:49:26.855993 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:49:26.857996 systemd[1]: Stopped target network.target - Network.
Feb 13 19:49:26.863678 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:49:26.864071 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:49:26.875851 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:49:26.884449 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:49:26.890614 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:49:26.904880 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:49:26.905840 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:49:26.917612 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:49:26.917707 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:49:26.922637 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:49:26.922703 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:49:26.931053 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:49:26.931163 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:49:26.934464 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:49:26.934609 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:49:26.942779 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:49:26.945135 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:49:26.950137 systemd-networkd[1084]: eth0: DHCPv6 lease lost
Feb 13 19:49:26.952925 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:49:26.953182 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:49:26.961214 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:49:26.961321 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:49:26.978127 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:49:26.984560 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:49:26.984780 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:49:27.000088 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:49:27.011815 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:49:27.011978 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:49:27.030588 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:49:27.030715 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:49:27.032062 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:49:27.032141 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:49:27.034561 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:49:27.034703 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:49:27.037891 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:49:27.038159 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:49:27.042748 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:49:27.042897 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:49:27.046158 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:49:27.046300 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:49:27.048228 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:49:27.048278 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:49:27.050175 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:49:27.050232 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:49:27.056572 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:49:27.056653 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:49:27.058858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:49:27.058937 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:49:27.088090 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:49:27.096652 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:49:27.096773 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:49:27.100563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:49:27.100663 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:49:27.106271 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:49:27.106471 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:49:27.247546 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:49:27.247712 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:49:27.255793 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:49:27.262253 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:49:27.262348 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:49:27.281478 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:49:27.295007 systemd[1]: Switching root.
Feb 13 19:49:27.321443 systemd-journald[179]: Journal stopped
Feb 13 19:49:30.014207 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:49:30.014303 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:49:30.014334 kernel: SELinux: policy capability open_perms=1
Feb 13 19:49:30.014356 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:49:30.014376 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:49:30.014404 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:49:30.014425 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:49:30.014449 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:49:30.014469 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:49:30.014491 kernel: audit: type=1403 audit(1739476168.122:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:49:30.014518 systemd[1]: Successfully loaded SELinux policy in 92.272ms.
Feb 13 19:49:30.014558 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.819ms.
Feb 13 19:49:30.014590 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:49:30.014614 systemd[1]: Detected virtualization amazon.
Feb 13 19:49:30.014636 systemd[1]: Detected architecture x86-64.
Feb 13 19:49:30.014657 systemd[1]: Detected first boot.
Feb 13 19:49:30.014680 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:49:30.014703 zram_generator::config[1371]: No configuration found.
Feb 13 19:49:30.014732 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:49:30.014840 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:49:30.014867 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:49:30.014891 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:49:30.014915 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:49:30.014938 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:49:30.014964 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:49:30.014987 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:49:30.015008 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:49:30.015028 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:49:30.015049 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:49:30.015069 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:49:30.015109 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:49:30.015128 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:49:30.015146 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:49:30.015169 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:49:30.015186 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:49:30.015205 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:49:30.015224 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:49:30.015243 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:49:30.015261 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:49:30.015281 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:49:30.015299 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:49:30.015321 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:49:30.015339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:49:30.015358 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:49:30.015377 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:49:30.015395 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:49:30.015413 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:49:30.015432 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:49:30.015450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:49:30.015469 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:49:30.015490 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:49:30.015508 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:49:30.015526 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:49:30.015545 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:49:30.015563 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:49:30.015581 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:49:30.015598 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:49:30.015617 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:49:30.015635 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:49:30.015656 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:49:30.015674 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:49:30.015694 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:49:30.015713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:49:30.015733 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:49:30.015750 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:49:30.015768 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:49:30.015788 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:49:30.015810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:49:30.015828 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:49:30.015846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:49:30.015866 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:49:30.015884 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:49:30.015903 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:49:30.015921 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:49:30.015940 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:49:30.015961 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:49:30.015979 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:49:30.015997 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:49:30.016016 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:49:30.016034 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:49:30.016092 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:49:30.016110 systemd[1]: Stopped verity-setup.service.
Feb 13 19:49:30.016129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:49:30.016147 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:49:30.016170 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:49:30.016188 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:49:30.016206 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:49:30.016225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:49:30.016243 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:49:30.016264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:49:30.016282 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:49:30.016301 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:49:30.016319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:49:30.016337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:49:30.016355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:49:30.016375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:49:30.016394 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:49:30.016413 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:49:30.016475 systemd-journald[1453]: Collecting audit messages is disabled.
Feb 13 19:49:30.016567 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:49:30.016588 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:49:30.016611 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:49:30.016634 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:49:30.016653 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:49:30.016674 systemd-journald[1453]: Journal started
Feb 13 19:49:30.016712 systemd-journald[1453]: Runtime Journal (/run/log/journal/ec2c842d2206859dd78b3a16c299c6c2) is 4.8M, max 38.6M, 33.7M free.
Feb 13 19:49:29.356316 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:49:29.406317 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:49:29.406725 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:49:30.033205 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:49:30.044114 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:49:30.044208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:49:30.047155 kernel: fuse: init (API version 7.39)
Feb 13 19:49:30.065166 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:49:30.070107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:49:30.081602 kernel: loop: module loaded
Feb 13 19:49:30.083222 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:49:30.117743 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:49:30.117844 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:49:30.121912 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:49:30.125206 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:49:30.127344 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:49:30.128067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:49:30.132161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:49:30.134547 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:49:30.137743 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:49:30.216166 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:49:30.345457 kernel: loop0: detected capacity change from 0 to 62848
Feb 13 19:49:30.328541 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:49:30.336297 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:49:30.341436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:49:30.355440 systemd-journald[1453]: Time spent on flushing to /var/log/journal/ec2c842d2206859dd78b3a16c299c6c2 is 210.308ms for 937 entries.
Feb 13 19:49:30.355440 systemd-journald[1453]: System Journal (/var/log/journal/ec2c842d2206859dd78b3a16c299c6c2) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:49:30.586689 kernel: ACPI: bus type drm_connector registered
Feb 13 19:49:30.586899 systemd-journald[1453]: Received client request to flush runtime journal.
Feb 13 19:49:30.586964 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:49:30.587612 kernel: loop1: detected capacity change from 0 to 218376
Feb 13 19:49:30.355316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:49:30.367396 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:49:30.375953 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:49:30.376555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:49:30.378682 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:49:30.382434 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:49:30.384661 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:49:30.412724 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:49:30.487828 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:49:30.511611 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:49:30.567327 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:49:30.579904 udevadm[1508]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:49:30.590886 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:49:30.630733 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:49:30.646615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:49:30.651302 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:49:30.655127 kernel: loop2: detected capacity change from 0 to 140992
Feb 13 19:49:30.653283 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:49:30.686500 systemd-tmpfiles[1516]: ACLs are not supported, ignoring.
Feb 13 19:49:30.687174 systemd-tmpfiles[1516]: ACLs are not supported, ignoring.
Feb 13 19:49:30.700073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:49:30.789176 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 19:49:30.937447 kernel: loop4: detected capacity change from 0 to 62848
Feb 13 19:49:30.951218 kernel: loop5: detected capacity change from 0 to 218376
Feb 13 19:49:30.979118 kernel: loop6: detected capacity change from 0 to 140992
Feb 13 19:49:31.010130 kernel: loop7: detected capacity change from 0 to 138184
Feb 13 19:49:31.043278 (sd-merge)[1523]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:49:31.043981 (sd-merge)[1523]: Merged extensions into '/usr'.
Feb 13 19:49:31.061966 systemd[1]: Reloading requested from client PID 1477 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:49:31.061985 systemd[1]: Reloading...
Feb 13 19:49:31.229106 zram_generator::config[1555]: No configuration found.
Feb 13 19:49:31.512425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:49:31.673206 systemd[1]: Reloading finished in 610 ms.
Feb 13 19:49:31.727315 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:49:31.751589 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:49:31.769436 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:49:31.794407 systemd[1]: Reloading requested from client PID 1597 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:49:31.794427 systemd[1]: Reloading...
Feb 13 19:49:31.853182 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:49:31.854810 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:49:31.859147 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:49:31.864107 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Feb 13 19:49:31.864210 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Feb 13 19:49:31.877036 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:49:31.877057 systemd-tmpfiles[1598]: Skipping /boot
Feb 13 19:49:31.931126 zram_generator::config[1624]: No configuration found.
Feb 13 19:49:31.937413 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:49:31.937432 systemd-tmpfiles[1598]: Skipping /boot
Feb 13 19:49:32.177138 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:49:32.217042 ldconfig[1467]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:49:32.287788 systemd[1]: Reloading finished in 492 ms.
Feb 13 19:49:32.321949 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:49:32.333797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:49:32.358703 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:49:32.366248 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:49:32.377652 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:49:32.383442 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:49:32.389567 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:49:32.395101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:49:32.395398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:49:32.404269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:49:32.418348 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:49:32.437216 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:49:32.438695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:49:32.438896 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:49:32.440986 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:49:32.455826 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:49:32.456367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:49:32.459441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:49:32.460198 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:49:32.463010 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:49:32.463321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:49:32.466269 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:49:32.471515 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:49:32.490761 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:49:32.494301 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:49:32.494605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:49:32.502709 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:49:32.506812 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:49:32.520415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:49:32.527609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:49:32.528887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:49:32.528987 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:49:32.533461 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:49:32.539295 augenrules[1715]: No rules
Feb 13 19:49:32.540372 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:49:32.552011 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:49:32.554217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:49:32.554996 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:49:32.555248 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:49:32.556884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:49:32.557094 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:49:32.567990 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:49:32.568383 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:49:32.575679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:49:32.575870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:49:32.580418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:49:32.605232 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:49:32.606382 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:49:32.611982 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:49:32.640187 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:49:32.642814 systemd-udevd[1718]: Using default interface naming scheme 'v255'.
Feb 13 19:49:32.643659 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:49:32.650198 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:49:32.682456 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:49:32.714669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:49:32.724311 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:49:32.823404 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:49:32.839624 systemd-resolved[1681]: Positive Trust Anchors:
Feb 13 19:49:32.839643 systemd-resolved[1681]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:49:32.839692 systemd-resolved[1681]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:49:32.856191 systemd-resolved[1681]: Defaulting to hostname 'linux'.
Feb 13 19:49:32.859346 systemd-networkd[1737]: lo: Link UP
Feb 13 19:49:32.860734 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:49:32.860984 systemd-networkd[1737]: lo: Gained carrier
Feb 13 19:49:32.862041 systemd-networkd[1737]: Enumeration completed
Feb 13 19:49:32.862258 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:49:32.864169 systemd[1]: Reached target network.target - Network.
Feb 13 19:49:32.865336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:49:32.877845 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:49:32.894868 (udev-worker)[1744]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:49:33.042131 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Feb 13 19:49:33.057390 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 19:49:33.084112 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:49:33.086156 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Feb 13 19:49:33.088098 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 19:49:33.091884 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 19:49:33.117103 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1743)
Feb 13 19:49:33.158480 systemd-networkd[1737]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:49:33.158493 systemd-networkd[1737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:49:33.162278 systemd-networkd[1737]: eth0: Link UP
Feb 13 19:49:33.163541 systemd-networkd[1737]: eth0: Gained carrier
Feb 13 19:49:33.163578 systemd-networkd[1737]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:49:33.179192 systemd-networkd[1737]: eth0: DHCPv4 address 172.31.31.165/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:49:33.234148 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:49:33.250545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:49:33.335898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:49:33.339511 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:49:33.353328 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:49:33.358257 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:49:33.403335 lvm[1850]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:49:33.434717 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:49:33.556126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:49:33.558163 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:49:33.564755 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:49:33.567018 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:49:33.568704 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:49:33.571597 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:49:33.573364 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:49:33.575027 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:49:33.576710 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:49:33.578138 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:49:33.578329 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:49:33.579305 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:49:33.581913 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:49:33.584856 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:49:33.610264 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:49:33.618380 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:49:33.621137 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:49:33.623014 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:49:33.624505 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:49:33.627564 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:49:33.627603 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:49:33.637168 lvm[1860]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:49:33.644764 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:49:33.666345 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:49:33.694431 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:49:33.707288 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:49:33.713584 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:49:33.715576 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:49:33.726415 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:49:33.731967 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:49:33.737231 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:49:33.741281 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:49:33.751471 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:49:33.781471 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:49:33.794665 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:49:33.796436 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:49:33.814444 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:49:33.827242 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:49:33.835732 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:49:33.853118 jq[1875]: true
Feb 13 19:49:33.879373 jq[1864]: false
Feb 13 19:49:33.907230 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:49:33.908649 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:49:33.927873 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:49:33.929849 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:49:33.963996 jq[1877]: true
Feb 13 19:49:34.022146 extend-filesystems[1865]: Found loop4
Feb 13 19:49:34.022146 extend-filesystems[1865]: Found loop5
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found loop6
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found loop7
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found nvme0n1
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found nvme0n1p1
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found nvme0n1p2
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found nvme0n1p3
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found usr
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found nvme0n1p4
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found nvme0n1p6
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found nvme0n1p7
Feb 13 19:49:34.046065 extend-filesystems[1865]: Found nvme0n1p9
Feb 13 19:49:34.046065 extend-filesystems[1865]: Checking size of /dev/nvme0n1p9
Feb 13 19:49:34.130370 update_engine[1874]: I20250213 19:49:34.030884  1874 main.cc:92] Flatcar Update Engine starting
Feb 13 19:49:34.130370 update_engine[1874]: I20250213 19:49:34.064908  1874 update_check_scheduler.cc:74] Next update check in 2m6s
Feb 13 19:49:34.039887 dbus-daemon[1863]: [system] SELinux support is enabled
Feb 13 19:49:34.030220 (ntainerd)[1885]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:07:00 UTC 2025 (1): Starting
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: ----------------------------------------------------
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: corporation. Support and training for ntp-4 are
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: available at https://www.nwtime.org/support
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: ----------------------------------------------------
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: proto: precision = 0.092 usec (-23)
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: basedate set to 2025-02-01
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: Listen normally on 3 eth0 172.31.31.165:123
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: Listen normally on 4 lo [::1]:123
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: bind(21) AF_INET6 fe80::4ba:60ff:fe4b:91eb%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: unable to create socket on eth0 (5) for fe80::4ba:60ff:fe4b:91eb%2#123
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: failed to init interface for address fe80::4ba:60ff:fe4b:91eb%2
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: Listening on routing socket on fd #21 for interface updates
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:49:34.150578 ntpd[1867]: 13 Feb 19:49:34 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:49:34.063381 dbus-daemon[1863]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1737 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:49:34.167217 extend-filesystems[1865]: Resized partition /dev/nvme0n1p9
Feb 13 19:49:34.040204 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:49:34.084847 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 19:49:34.048938 systemd-logind[1871]: Watching system buttons on /dev/input/event2 (Power Button)
Feb 13 19:49:34.115115 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:07:00 UTC 2025 (1): Starting
Feb 13 19:49:34.200141 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 19:49:34.200309 extend-filesystems[1915]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:49:34.048965 systemd-logind[1871]: Watching system buttons on /dev/input/event3 (Sleep Button)
Feb 13 19:49:34.115144 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:49:34.048989 systemd-logind[1871]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:49:34.115155 ntpd[1867]: ----------------------------------------------------
Feb 13 19:49:34.052232 systemd-logind[1871]: New seat seat0.
Feb 13 19:49:34.115165 ntpd[1867]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:49:34.058435 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:49:34.115175 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:49:34.058678 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:49:34.115184 ntpd[1867]: corporation. Support and training for ntp-4 are
Feb 13 19:49:34.063677 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:49:34.115194 ntpd[1867]: available at https://www.nwtime.org/support
Feb 13 19:49:34.079837 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:49:34.115204 ntpd[1867]: ----------------------------------------------------
Feb 13 19:49:34.079901 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:49:34.117331 ntpd[1867]: proto: precision = 0.092 usec (-23)
Feb 13 19:49:34.086874 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:49:34.118172 ntpd[1867]: basedate set to 2025-02-01
Feb 13 19:49:34.086907 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:49:34.118192 ntpd[1867]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:49:34.089929 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:49:34.120452 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:49:34.105531 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:49:34.120503 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:49:34.132289 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:49:34.130743 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:49:34.204871 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 19:49:34.130797 ntpd[1867]: Listen normally on 3 eth0 172.31.31.165:123
Feb 13 19:49:34.130842 ntpd[1867]: Listen normally on 4 lo [::1]:123
Feb 13 19:49:34.131946 ntpd[1867]: bind(21) AF_INET6 fe80::4ba:60ff:fe4b:91eb%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:49:34.131981 ntpd[1867]: unable to create socket on eth0 (5) for fe80::4ba:60ff:fe4b:91eb%2#123
Feb 13 19:49:34.132000 ntpd[1867]: failed to init interface for address fe80::4ba:60ff:fe4b:91eb%2
Feb 13 19:49:34.132204 ntpd[1867]: Listening on routing socket on fd #21 for interface updates
Feb 13 19:49:34.133874 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:49:34.133905 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:49:34.265051 coreos-metadata[1862]: Feb 13 19:49:34.253 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:49:34.265051 coreos-metadata[1862]: Feb 13 19:49:34.258 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 19:49:34.266027 coreos-metadata[1862]: Feb 13 19:49:34.265 INFO Fetch successful
Feb 13 19:49:34.266027 coreos-metadata[1862]: Feb 13 19:49:34.265 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 19:49:34.266619 coreos-metadata[1862]: Feb 13 19:49:34.266 INFO Fetch successful
Feb 13 19:49:34.266619 coreos-metadata[1862]: Feb 13 19:49:34.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 19:49:34.268059 coreos-metadata[1862]: Feb 13 19:49:34.267 INFO Fetch successful
Feb 13 19:49:34.268059 coreos-metadata[1862]: Feb 13 19:49:34.267 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 19:49:34.268940 coreos-metadata[1862]: Feb 13 19:49:34.268 INFO Fetch successful
Feb 13 19:49:34.268940 coreos-metadata[1862]: Feb 13 19:49:34.268 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 19:49:34.270796 coreos-metadata[1862]: Feb 13 19:49:34.270 INFO Fetch failed with 404: resource not found
Feb 13 19:49:34.270796 coreos-metadata[1862]: Feb 13 19:49:34.270 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 19:49:34.275227 coreos-metadata[1862]: Feb 13 19:49:34.274 INFO Fetch successful
Feb 13 19:49:34.275227 coreos-metadata[1862]: Feb 13 19:49:34.275 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 19:49:34.281194 coreos-metadata[1862]: Feb 13 19:49:34.280 INFO Fetch successful
Feb 13 19:49:34.281194 coreos-metadata[1862]: Feb 13 19:49:34.281 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 19:49:34.290110 coreos-metadata[1862]: Feb 13 19:49:34.287 INFO Fetch successful
Feb 13 19:49:34.290110 coreos-metadata[1862]: Feb 13 19:49:34.287 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 19:49:34.292578 coreos-metadata[1862]: Feb 13 19:49:34.292 INFO Fetch successful
Feb 13 19:49:34.292578 coreos-metadata[1862]: Feb 13 19:49:34.292 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 19:49:34.295107 coreos-metadata[1862]: Feb 13 19:49:34.293 INFO Fetch successful
Feb 13 19:49:34.305102 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 19:49:34.380147 extend-filesystems[1915]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 19:49:34.380147 extend-filesystems[1915]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:49:34.380147 extend-filesystems[1915]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 19:49:34.388672 extend-filesystems[1865]: Resized filesystem in /dev/nvme0n1p9
Feb 13 19:49:34.380655 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:49:34.382209 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:49:34.416280 bash[1929]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:49:34.418010 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:49:34.446523 systemd[1]: Starting sshkeys.service...
Feb 13 19:49:34.465178 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1751)
Feb 13 19:49:34.489702 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 19:49:34.496644 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:49:34.528950 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 19:49:34.529609 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1904 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 19:49:34.531504 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 19:49:34.548557 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 19:49:34.586780 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 19:49:34.595680 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 19:49:34.651343 polkitd[1954]: Started polkitd version 121
Feb 13 19:49:34.682761 polkitd[1954]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 19:49:34.682849 polkitd[1954]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 19:49:34.697143 polkitd[1954]: Finished loading, compiling and executing 2 rules
Feb 13 19:49:34.700485 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 19:49:34.700826 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 19:49:34.707885 polkitd[1954]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 19:49:34.773269 systemd-hostnamed[1904]: Hostname set to <ip-172-31-31-165> (transient)
Feb 13 19:49:34.773524 systemd-resolved[1681]: System hostname changed to 'ip-172-31-31-165'.
Feb 13 19:49:34.840963 locksmithd[1907]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:49:34.952925 coreos-metadata[1963]: Feb 13 19:49:34.950 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:49:34.953521 coreos-metadata[1963]: Feb 13 19:49:34.953 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 19:49:34.957311 coreos-metadata[1963]: Feb 13 19:49:34.957 INFO Fetch successful
Feb 13 19:49:34.957886 coreos-metadata[1963]: Feb 13 19:49:34.957 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 19:49:34.961596 coreos-metadata[1963]: Feb 13 19:49:34.961 INFO Fetch successful
Feb 13 19:49:34.968195 unknown[1963]: wrote ssh authorized keys file for user: core
Feb 13 19:49:35.035059 update-ssh-keys[2052]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:49:35.043357 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:49:35.057911 systemd[1]: Finished sshkeys.service.
Feb 13 19:49:35.099285 systemd-networkd[1737]: eth0: Gained IPv6LL
Feb 13 19:49:35.107149 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:49:35.110142 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:49:35.122070 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 19:49:35.133819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:49:35.145213 containerd[1885]: time="2025-02-13T19:49:35.145103800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:49:35.151553 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:49:35.230372 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:49:35.236511 containerd[1885]: time="2025-02-13T19:49:35.236309003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.238567423Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.238622526Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.238648148Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.238869344Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.238892781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.238965232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.238983797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.239257025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.239279157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.239300432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240024 containerd[1885]: time="2025-02-13T19:49:35.239315058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240531 containerd[1885]: time="2025-02-13T19:49:35.239411568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240531 containerd[1885]: time="2025-02-13T19:49:35.239769655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240531 containerd[1885]: time="2025-02-13T19:49:35.239988981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:49:35.240531 containerd[1885]: time="2025-02-13T19:49:35.240009976Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:49:35.240531 containerd[1885]: time="2025-02-13T19:49:35.240138453Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:49:35.240531 containerd[1885]: time="2025-02-13T19:49:35.240195132Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:49:35.254872 containerd[1885]: time="2025-02-13T19:49:35.254143838Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:49:35.254872 containerd[1885]: time="2025-02-13T19:49:35.254243673Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:49:35.254872 containerd[1885]: time="2025-02-13T19:49:35.254267989Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:49:35.254872 containerd[1885]: time="2025-02-13T19:49:35.254339268Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:49:35.254872 containerd[1885]: time="2025-02-13T19:49:35.254362650Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:49:35.254872 containerd[1885]: time="2025-02-13T19:49:35.254577722Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:49:35.255198 containerd[1885]: time="2025-02-13T19:49:35.255013105Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:49:35.255241 containerd[1885]: time="2025-02-13T19:49:35.255195924Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:49:35.255241 containerd[1885]: time="2025-02-13T19:49:35.255220047Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:49:35.255316 containerd[1885]: time="2025-02-13T19:49:35.255242743Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:49:35.255316 containerd[1885]: time="2025-02-13T19:49:35.255263960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:49:35.255316 containerd[1885]: time="2025-02-13T19:49:35.255285774Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:49:35.255316 containerd[1885]: time="2025-02-13T19:49:35.255305470Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:49:35.255449 containerd[1885]: time="2025-02-13T19:49:35.255326095Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:49:35.255449 containerd[1885]: time="2025-02-13T19:49:35.255348685Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:49:35.255449 containerd[1885]: time="2025-02-13T19:49:35.255367725Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:49:35.255449 containerd[1885]: time="2025-02-13T19:49:35.255386159Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:49:35.255449 containerd[1885]: time="2025-02-13T19:49:35.255403763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:49:35.255449 containerd[1885]: time="2025-02-13T19:49:35.255434312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255455113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255473312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255503667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255521609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255542516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255561262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255581064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255600325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255620698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.255653 containerd[1885]: time="2025-02-13T19:49:35.255638733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.256054 containerd[1885]: time="2025-02-13T19:49:35.255656668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.256054 containerd[1885]: time="2025-02-13T19:49:35.255676595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.256054 containerd[1885]: time="2025-02-13T19:49:35.255697817Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:49:35.256054 containerd[1885]: time="2025-02-13T19:49:35.255729625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.256054 containerd[1885]: time="2025-02-13T19:49:35.255748314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:49:35.256054 containerd[1885]: time="2025-02-13T19:49:35.255768117Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:49:35.257784 containerd[1885]: time="2025-02-13T19:49:35.257047473Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:49:35.257784 containerd[1885]: time="2025-02-13T19:49:35.257116584Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:49:35.257784 containerd[1885]: time="2025-02-13T19:49:35.257137883Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:49:35.257784 containerd[1885]: time="2025-02-13T19:49:35.257157130Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:49:35.257784 containerd[1885]: time="2025-02-13T19:49:35.257172509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:49:35.257784 containerd[1885]: time="2025-02-13T19:49:35.257193533Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:49:35.257784 containerd[1885]: time="2025-02-13T19:49:35.257208933Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:49:35.257784 containerd[1885]: time="2025-02-13T19:49:35.257224658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:49:35.258128 containerd[1885]: time="2025-02-13T19:49:35.257637887Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:49:35.258128 containerd[1885]: time="2025-02-13T19:49:35.257707017Z" level=info msg="Connect containerd service" Feb 13 19:49:35.260277 containerd[1885]: time="2025-02-13T19:49:35.257767035Z" level=info msg="using legacy CRI server" Feb 13 19:49:35.260277 containerd[1885]: time="2025-02-13T19:49:35.259112885Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:49:35.260277 containerd[1885]: time="2025-02-13T19:49:35.259291518Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:49:35.260277 containerd[1885]: time="2025-02-13T19:49:35.260044250Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:49:35.262979 containerd[1885]: time="2025-02-13T19:49:35.262922334Z" level=info msg="Start subscribing containerd event" Feb 13 19:49:35.263195 containerd[1885]: time="2025-02-13T19:49:35.262995021Z" level=info msg="Start recovering state" Feb 13 19:49:35.265103 containerd[1885]: time="2025-02-13T19:49:35.263310189Z" level=info msg="Start event monitor" Feb 13 19:49:35.265103 containerd[1885]: time="2025-02-13T19:49:35.263340853Z" level=info msg="Start 
snapshots syncer" Feb 13 19:49:35.265103 containerd[1885]: time="2025-02-13T19:49:35.263355717Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:49:35.265103 containerd[1885]: time="2025-02-13T19:49:35.263366315Z" level=info msg="Start streaming server" Feb 13 19:49:35.265103 containerd[1885]: time="2025-02-13T19:49:35.263865765Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:49:35.265103 containerd[1885]: time="2025-02-13T19:49:35.263920719Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:49:35.264124 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:49:35.265754 containerd[1885]: time="2025-02-13T19:49:35.265723522Z" level=info msg="containerd successfully booted in 0.128235s" Feb 13 19:49:35.292816 amazon-ssm-agent[2062]: Initializing new seelog logger Feb 13 19:49:35.293421 amazon-ssm-agent[2062]: New Seelog Logger Creation Complete Feb 13 19:49:35.293576 amazon-ssm-agent[2062]: 2025/02/13 19:49:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:49:35.293657 amazon-ssm-agent[2062]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:49:35.294194 amazon-ssm-agent[2062]: 2025/02/13 19:49:35 processing appconfig overrides Feb 13 19:49:35.294658 amazon-ssm-agent[2062]: 2025/02/13 19:49:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:49:35.294744 amazon-ssm-agent[2062]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:49:35.294888 amazon-ssm-agent[2062]: 2025/02/13 19:49:35 processing appconfig overrides Feb 13 19:49:35.295326 amazon-ssm-agent[2062]: 2025/02/13 19:49:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:49:35.295396 amazon-ssm-agent[2062]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:49:35.295603 amazon-ssm-agent[2062]: 2025/02/13 19:49:35 processing appconfig overrides Feb 13 19:49:35.296210 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO Proxy environment variables: Feb 13 19:49:35.301275 amazon-ssm-agent[2062]: 2025/02/13 19:49:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:49:35.301275 amazon-ssm-agent[2062]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:49:35.301450 amazon-ssm-agent[2062]: 2025/02/13 19:49:35 processing appconfig overrides Feb 13 19:49:35.397601 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO no_proxy: Feb 13 19:49:35.497490 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO https_proxy: Feb 13 19:49:35.595435 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO http_proxy: Feb 13 19:49:35.694464 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:49:35.795210 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:49:35.885428 sshd_keygen[1898]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:49:35.896308 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO Agent will take identity from EC2 Feb 13 19:49:35.921885 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:49:35.935775 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:49:35.948291 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:49:35.948529 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:49:35.958207 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:49:35.985523 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:49:35.993839 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Feb 13 19:49:35.997855 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:49:36.003834 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:49:36.005398 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:49:36.096743 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:49:36.170403 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:49:36.170573 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:49:36.170640 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 19:49:36.171074 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:49:36.171333 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:49:36.171333 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO [Registrar] Starting registrar module Feb 13 19:49:36.171543 amazon-ssm-agent[2062]: 2025-02-13 19:49:35 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:49:36.171543 amazon-ssm-agent[2062]: 2025-02-13 19:49:36 INFO [EC2Identity] EC2 registration was successful. 
Feb 13 19:49:36.171543 amazon-ssm-agent[2062]: 2025-02-13 19:49:36 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:49:36.171543 amazon-ssm-agent[2062]: 2025-02-13 19:49:36 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:49:36.171543 amazon-ssm-agent[2062]: 2025-02-13 19:49:36 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:49:36.197202 amazon-ssm-agent[2062]: 2025-02-13 19:49:36 INFO [CredentialRefresher] Next credential rotation will be in 32.3916443129 minutes Feb 13 19:49:36.782843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:36.791762 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:49:36.797537 systemd[1]: Startup finished in 835ms (kernel) + 8.311s (initrd) + 8.762s (userspace) = 17.909s. Feb 13 19:49:36.983717 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:37.027403 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:49:37.038229 systemd[1]: Started sshd@0-172.31.31.165:22-139.178.89.65:43290.service - OpenSSH per-connection server daemon (139.178.89.65:43290). 
Feb 13 19:49:37.116004 ntpd[1867]: Listen normally on 6 eth0 [fe80::4ba:60ff:fe4b:91eb%2]:123 Feb 13 19:49:37.118278 ntpd[1867]: 13 Feb 19:49:37 ntpd[1867]: Listen normally on 6 eth0 [fe80::4ba:60ff:fe4b:91eb%2]:123 Feb 13 19:49:37.193517 amazon-ssm-agent[2062]: 2025-02-13 19:49:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:49:37.295542 amazon-ssm-agent[2062]: 2025-02-13 19:49:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2113) started Feb 13 19:49:37.378159 sshd[2110]: Accepted publickey for core from 139.178.89.65 port 43290 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:49:37.378850 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:37.397099 amazon-ssm-agent[2062]: 2025-02-13 19:49:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:49:37.413005 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:49:37.428289 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:49:37.440153 systemd-logind[1871]: New session 1 of user core. Feb 13 19:49:37.480450 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:49:37.487642 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:49:37.512667 (systemd)[2126]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:49:37.757768 systemd[2126]: Queued start job for default target default.target. Feb 13 19:49:37.764615 systemd[2126]: Created slice app.slice - User Application Slice. Feb 13 19:49:37.764659 systemd[2126]: Reached target paths.target - Paths. Feb 13 19:49:37.764680 systemd[2126]: Reached target timers.target - Timers. 
Feb 13 19:49:37.768322 systemd[2126]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:49:37.790261 systemd[2126]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:49:37.790561 systemd[2126]: Reached target sockets.target - Sockets. Feb 13 19:49:37.790586 systemd[2126]: Reached target basic.target - Basic System. Feb 13 19:49:37.791489 systemd[2126]: Reached target default.target - Main User Target. Feb 13 19:49:37.791545 systemd[2126]: Startup finished in 253ms. Feb 13 19:49:37.792363 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:49:37.799323 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:49:37.958670 systemd[1]: Started sshd@1-172.31.31.165:22-139.178.89.65:43294.service - OpenSSH per-connection server daemon (139.178.89.65:43294). Feb 13 19:49:38.193139 sshd[2142]: Accepted publickey for core from 139.178.89.65 port 43294 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:49:38.199649 sshd-session[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:38.207325 systemd-logind[1871]: New session 2 of user core. Feb 13 19:49:38.209271 kubelet[2104]: E0213 19:49:38.208634 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:38.222378 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:49:38.222861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:38.223042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:38.223425 systemd[1]: kubelet.service: Consumed 1.100s CPU time. 
Feb 13 19:49:38.350112 sshd[2145]: Connection closed by 139.178.89.65 port 43294 Feb 13 19:49:38.350785 sshd-session[2142]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:38.355218 systemd[1]: sshd@1-172.31.31.165:22-139.178.89.65:43294.service: Deactivated successfully. Feb 13 19:49:38.357336 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:49:38.359041 systemd-logind[1871]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:49:38.360426 systemd-logind[1871]: Removed session 2. Feb 13 19:49:38.404983 systemd[1]: Started sshd@2-172.31.31.165:22-139.178.89.65:43300.service - OpenSSH per-connection server daemon (139.178.89.65:43300). Feb 13 19:49:38.588958 sshd[2150]: Accepted publickey for core from 139.178.89.65 port 43300 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:49:38.591377 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:38.599407 systemd-logind[1871]: New session 3 of user core. Feb 13 19:49:38.607969 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:49:38.726221 sshd[2152]: Connection closed by 139.178.89.65 port 43300 Feb 13 19:49:38.727123 sshd-session[2150]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:38.735892 systemd[1]: sshd@2-172.31.31.165:22-139.178.89.65:43300.service: Deactivated successfully. Feb 13 19:49:38.740032 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:49:38.744724 systemd-logind[1871]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:49:38.746272 systemd-logind[1871]: Removed session 3. Feb 13 19:49:38.765517 systemd[1]: Started sshd@3-172.31.31.165:22-139.178.89.65:43308.service - OpenSSH per-connection server daemon (139.178.89.65:43308). 
Feb 13 19:49:38.934120 sshd[2157]: Accepted publickey for core from 139.178.89.65 port 43308 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:49:38.938577 sshd-session[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:38.953196 systemd-logind[1871]: New session 4 of user core. Feb 13 19:49:38.971147 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:49:39.093608 sshd[2159]: Connection closed by 139.178.89.65 port 43308 Feb 13 19:49:39.094586 sshd-session[2157]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:39.100494 systemd[1]: sshd@3-172.31.31.165:22-139.178.89.65:43308.service: Deactivated successfully. Feb 13 19:49:39.104683 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:49:39.105885 systemd-logind[1871]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:49:39.108914 systemd-logind[1871]: Removed session 4. Feb 13 19:49:39.137616 systemd[1]: Started sshd@4-172.31.31.165:22-139.178.89.65:43320.service - OpenSSH per-connection server daemon (139.178.89.65:43320). Feb 13 19:49:39.324003 sshd[2164]: Accepted publickey for core from 139.178.89.65 port 43320 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:49:39.325728 sshd-session[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:39.332508 systemd-logind[1871]: New session 5 of user core. Feb 13 19:49:39.338491 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:49:39.505380 sudo[2167]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:49:39.505795 sudo[2167]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:39.527380 sudo[2167]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:39.550029 sshd[2166]: Connection closed by 139.178.89.65 port 43320 Feb 13 19:49:39.551047 sshd-session[2164]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:39.557779 systemd[1]: sshd@4-172.31.31.165:22-139.178.89.65:43320.service: Deactivated successfully. Feb 13 19:49:39.561511 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:49:39.564048 systemd-logind[1871]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:49:39.565898 systemd-logind[1871]: Removed session 5. Feb 13 19:49:39.592616 systemd[1]: Started sshd@5-172.31.31.165:22-139.178.89.65:43330.service - OpenSSH per-connection server daemon (139.178.89.65:43330). Feb 13 19:49:39.761857 sshd[2172]: Accepted publickey for core from 139.178.89.65 port 43330 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:49:39.763369 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:39.770538 systemd-logind[1871]: New session 6 of user core. Feb 13 19:49:39.777447 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:49:39.887982 sudo[2176]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:49:39.888710 sudo[2176]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:39.897747 sudo[2176]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:39.906701 sudo[2175]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:49:39.907150 sudo[2175]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:39.925610 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:49:39.995182 augenrules[2198]: No rules Feb 13 19:49:39.997621 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:49:39.997961 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:49:39.999269 sudo[2175]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:40.023848 sshd[2174]: Connection closed by 139.178.89.65 port 43330 Feb 13 19:49:40.024545 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:40.038572 systemd[1]: sshd@5-172.31.31.165:22-139.178.89.65:43330.service: Deactivated successfully. Feb 13 19:49:40.047791 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:49:40.067464 systemd-logind[1871]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:49:40.073666 systemd[1]: Started sshd@6-172.31.31.165:22-139.178.89.65:43342.service - OpenSSH per-connection server daemon (139.178.89.65:43342). Feb 13 19:49:40.077950 systemd-logind[1871]: Removed session 6. 
Feb 13 19:49:40.268916 sshd[2206]: Accepted publickey for core from 139.178.89.65 port 43342 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:49:40.270815 sshd-session[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:40.288320 systemd-logind[1871]: New session 7 of user core. Feb 13 19:49:40.294326 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:49:40.415697 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:49:40.417261 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:41.745589 systemd-resolved[1681]: Clock change detected. Flushing caches. Feb 13 19:49:42.283297 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:42.284310 systemd[1]: kubelet.service: Consumed 1.100s CPU time. Feb 13 19:49:42.294924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:42.335489 systemd[1]: Reloading requested from client PID 2242 ('systemctl') (unit session-7.scope)... Feb 13 19:49:42.335508 systemd[1]: Reloading... Feb 13 19:49:42.471209 zram_generator::config[2278]: No configuration found. Feb 13 19:49:42.735100 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:42.881682 systemd[1]: Reloading finished in 545 ms. Feb 13 19:49:42.968233 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:49:42.968356 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:49:42.968752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:42.975618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:49:43.792853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:43.808884 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:43.867154 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:43.867154 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:49:43.867154 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:43.867646 kubelet[2339]: I0213 19:49:43.867296 2339 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:44.843799 kubelet[2339]: I0213 19:49:44.843745 2339 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:49:44.843799 kubelet[2339]: I0213 19:49:44.843783 2339 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:44.844173 kubelet[2339]: I0213 19:49:44.844131 2339 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:49:44.877811 kubelet[2339]: I0213 19:49:44.877069 2339 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:44.895071 kubelet[2339]: E0213 19:49:44.894267 2339 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" 
Feb 13 19:49:44.895071 kubelet[2339]: I0213 19:49:44.894308 2339 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:49:44.898277 kubelet[2339]: I0213 19:49:44.898237 2339 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:49:44.898515 kubelet[2339]: I0213 19:49:44.898477 2339 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:44.898713 kubelet[2339]: I0213 19:49:44.898519 2339 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.31.165","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManag
erReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:49:44.898870 kubelet[2339]: I0213 19:49:44.898721 2339 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:49:44.898870 kubelet[2339]: I0213 19:49:44.898736 2339 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:49:44.898949 kubelet[2339]: I0213 19:49:44.898885 2339 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:44.907548 kubelet[2339]: I0213 19:49:44.907511 2339 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:49:44.907790 kubelet[2339]: I0213 19:49:44.907767 2339 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:44.907860 kubelet[2339]: I0213 19:49:44.907803 2339 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:49:44.907860 kubelet[2339]: I0213 19:49:44.907818 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:44.912199 kubelet[2339]: E0213 19:49:44.911898 2339 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:49:44.912199 kubelet[2339]: E0213 19:49:44.912048 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:49:44.914181 kubelet[2339]: I0213 19:49:44.914129 2339 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:49:44.914626 kubelet[2339]: I0213 19:49:44.914601 2339 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:44.914707 kubelet[2339]: W0213 19:49:44.914665 2339 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:49:44.917003 kubelet[2339]: I0213 19:49:44.916970 2339 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:49:44.917102 kubelet[2339]: I0213 19:49:44.917017 2339 server.go:1287] "Started kubelet" Feb 13 19:49:44.920180 kubelet[2339]: I0213 19:49:44.917198 2339 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:44.920180 kubelet[2339]: I0213 19:49:44.917416 2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:44.920180 kubelet[2339]: I0213 19:49:44.918995 2339 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:44.920180 kubelet[2339]: I0213 19:49:44.919822 2339 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:49:44.923054 kubelet[2339]: I0213 19:49:44.923032 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:44.923862 kubelet[2339]: I0213 19:49:44.923838 2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:49:44.928553 kubelet[2339]: E0213 19:49:44.928522 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:44.928660 kubelet[2339]: I0213 19:49:44.928566 2339 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:49:44.929225 kubelet[2339]: I0213 19:49:44.929204 2339 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:49:44.929404 kubelet[2339]: I0213 19:49:44.929273 2339 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:44.931967 kubelet[2339]: I0213 19:49:44.931830 2339 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:44.932096 kubelet[2339]: I0213 
19:49:44.932065 2339 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:44.935910 kubelet[2339]: W0213 19:49:44.935569 2339 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:49:44.936111 kubelet[2339]: E0213 19:49:44.936085 2339 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 19:49:44.937855 kubelet[2339]: E0213 19:49:44.936304 2339 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.31.165.1823dc5fc13a8aa8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.31.165,UID:172.31.31.165,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.31.165,},FirstTimestamp:2025-02-13 19:49:44.916986536 +0000 UTC m=+1.102215501,LastTimestamp:2025-02-13 19:49:44.916986536 +0000 UTC m=+1.102215501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.165,}" Feb 13 19:49:44.938011 kubelet[2339]: W0213 19:49:44.937936 2339 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:49:44.938011 kubelet[2339]: E0213 19:49:44.937964 2339 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:49:44.938115 kubelet[2339]: W0213 19:49:44.938074 2339 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.31.165" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:49:44.938115 kubelet[2339]: E0213 19:49:44.938093 2339 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.31.165\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:49:44.939770 kubelet[2339]: I0213 19:49:44.939746 2339 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:44.941820 kubelet[2339]: E0213 19:49:44.941785 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.31.165\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:49:44.941919 kubelet[2339]: E0213 19:49:44.941909 2339 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:49:44.955765 kubelet[2339]: I0213 19:49:44.955706 2339 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:49:44.955765 kubelet[2339]: I0213 19:49:44.955732 2339 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:44.955765 kubelet[2339]: I0213 19:49:44.955758 2339 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:44.964204 kubelet[2339]: E0213 19:49:44.961112 2339 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.31.165.1823dc5fc2b6a44d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.31.165,UID:172.31.31.165,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.31.165,},FirstTimestamp:2025-02-13 19:49:44.941896781 +0000 UTC m=+1.127125746,LastTimestamp:2025-02-13 19:49:44.941896781 +0000 UTC m=+1.127125746,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.165,}" Feb 13 19:49:44.964204 kubelet[2339]: I0213 19:49:44.961617 2339 policy_none.go:49] "None policy: Start" Feb 13 19:49:44.964204 kubelet[2339]: I0213 19:49:44.961649 2339 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:49:44.964204 kubelet[2339]: I0213 19:49:44.961664 2339 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:44.964526 kubelet[2339]: E0213 19:49:44.962647 2339 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{172.31.31.165.1823dc5fc36969c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.31.165,UID:172.31.31.165,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.31.165 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.31.165,},FirstTimestamp:2025-02-13 19:49:44.95361274 +0000 UTC m=+1.138841704,LastTimestamp:2025-02-13 19:49:44.95361274 +0000 UTC m=+1.138841704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.165,}" Feb 13 19:49:44.973678 kubelet[2339]: E0213 19:49:44.973460 2339 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.31.165.1823dc5fc3698616 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.31.165,UID:172.31.31.165,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.31.165 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.31.165,},FirstTimestamp:2025-02-13 19:49:44.95361999 +0000 UTC m=+1.138848934,LastTimestamp:2025-02-13 19:49:44.95361999 +0000 UTC m=+1.138848934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.165,}" Feb 13 19:49:44.988812 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:49:45.005326 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:49:45.011404 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:49:45.022687 kubelet[2339]: I0213 19:49:45.022593 2339 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:45.026050 kubelet[2339]: I0213 19:49:45.026026 2339 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:49:45.027524 kubelet[2339]: I0213 19:49:45.026487 2339 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:45.027524 kubelet[2339]: I0213 19:49:45.026778 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:45.029787 kubelet[2339]: E0213 19:49:45.029766 2339 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:49:45.029998 kubelet[2339]: E0213 19:49:45.029985 2339 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.31.165\" not found" Feb 13 19:49:45.107652 kubelet[2339]: I0213 19:49:45.107325 2339 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:45.111969 kubelet[2339]: I0213 19:49:45.111913 2339 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:49:45.111969 kubelet[2339]: I0213 19:49:45.111952 2339 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:49:45.111969 kubelet[2339]: I0213 19:49:45.111977 2339 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:49:45.114615 kubelet[2339]: I0213 19:49:45.111986 2339 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:49:45.114615 kubelet[2339]: E0213 19:49:45.112047 2339 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 19:49:45.128073 kubelet[2339]: I0213 19:49:45.128024 2339 kubelet_node_status.go:76] "Attempting to register node" node="172.31.31.165" Feb 13 19:49:45.136264 kubelet[2339]: I0213 19:49:45.136235 2339 kubelet_node_status.go:79] "Successfully registered node" node="172.31.31.165" Feb 13 19:49:45.136264 kubelet[2339]: E0213 19:49:45.136269 2339 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.31.165\": node \"172.31.31.165\" not found" Feb 13 19:49:45.150577 kubelet[2339]: E0213 19:49:45.150497 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:45.251309 kubelet[2339]: E0213 19:49:45.251265 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:45.351888 kubelet[2339]: E0213 19:49:45.351842 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:45.452574 kubelet[2339]: E0213 19:49:45.452349 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:45.511185 sudo[2209]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:45.533133 sshd[2208]: Connection closed by 139.178.89.65 port 43342 Feb 13 19:49:45.534014 sshd-session[2206]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:45.545098 systemd[1]: sshd@6-172.31.31.165:22-139.178.89.65:43342.service: Deactivated successfully. Feb 13 19:49:45.551519 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 13 19:49:45.553837 kubelet[2339]: E0213 19:49:45.553458 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:45.558220 systemd-logind[1871]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:49:45.563218 systemd-logind[1871]: Removed session 7. Feb 13 19:49:45.654563 kubelet[2339]: E0213 19:49:45.654502 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:45.755660 kubelet[2339]: E0213 19:49:45.755485 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:45.846600 kubelet[2339]: I0213 19:49:45.846555 2339 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:49:45.846801 kubelet[2339]: W0213 19:49:45.846754 2339 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:49:45.855755 kubelet[2339]: E0213 19:49:45.855667 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:45.912630 kubelet[2339]: E0213 19:49:45.912572 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:49:45.956406 kubelet[2339]: E0213 19:49:45.956358 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:46.057088 kubelet[2339]: E0213 19:49:46.056951 2339 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:46.157555 kubelet[2339]: E0213 19:49:46.157510 2339 kubelet_node_status.go:467] "Error getting 
the current node from lister" err="node \"172.31.31.165\" not found" Feb 13 19:49:46.259053 kubelet[2339]: I0213 19:49:46.259016 2339 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:49:46.259665 containerd[1885]: time="2025-02-13T19:49:46.259617080Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:49:46.260533 kubelet[2339]: I0213 19:49:46.259954 2339 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:49:46.913151 kubelet[2339]: I0213 19:49:46.913110 2339 apiserver.go:52] "Watching apiserver" Feb 13 19:49:46.913722 kubelet[2339]: E0213 19:49:46.913120 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:49:46.922474 kubelet[2339]: E0213 19:49:46.921892 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5" Feb 13 19:49:46.929960 kubelet[2339]: I0213 19:49:46.929922 2339 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:49:46.939482 kubelet[2339]: I0213 19:49:46.939442 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vndf\" (UniqueName: \"kubernetes.io/projected/cdc462cb-a0ba-4702-9ce2-288e0299064b-kube-api-access-5vndf\") pod \"kube-proxy-cpkhk\" (UID: \"cdc462cb-a0ba-4702-9ce2-288e0299064b\") " pod="kube-system/kube-proxy-cpkhk" Feb 13 19:49:46.939640 kubelet[2339]: I0213 19:49:46.939529 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/8f980a18-9339-47a2-8d22-d71f11933c49-lib-modules\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.939640 kubelet[2339]: I0213 19:49:46.939557 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f980a18-9339-47a2-8d22-d71f11933c49-xtables-lock\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.939640 kubelet[2339]: I0213 19:49:46.939586 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8f980a18-9339-47a2-8d22-d71f11933c49-var-run-calico\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.939640 kubelet[2339]: I0213 19:49:46.939609 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f980a18-9339-47a2-8d22-d71f11933c49-var-lib-calico\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.939640 kubelet[2339]: I0213 19:49:46.939631 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/43ba3476-37d5-44c8-9a35-8e22f8bd98f5-socket-dir\") pod \"csi-node-driver-fdgfj\" (UID: \"43ba3476-37d5-44c8-9a35-8e22f8bd98f5\") " pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:49:46.939873 kubelet[2339]: I0213 19:49:46.939653 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8f980a18-9339-47a2-8d22-d71f11933c49-tigera-ca-bundle\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.939873 kubelet[2339]: I0213 19:49:46.939676 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8f980a18-9339-47a2-8d22-d71f11933c49-cni-log-dir\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.939873 kubelet[2339]: I0213 19:49:46.939712 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdc462cb-a0ba-4702-9ce2-288e0299064b-lib-modules\") pod \"kube-proxy-cpkhk\" (UID: \"cdc462cb-a0ba-4702-9ce2-288e0299064b\") " pod="kube-system/kube-proxy-cpkhk" Feb 13 19:49:46.939873 kubelet[2339]: I0213 19:49:46.939735 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8f980a18-9339-47a2-8d22-d71f11933c49-policysync\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.939873 kubelet[2339]: I0213 19:49:46.939766 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8f980a18-9339-47a2-8d22-d71f11933c49-cni-bin-dir\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.940150 kubelet[2339]: I0213 19:49:46.939795 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8f980a18-9339-47a2-8d22-d71f11933c49-cni-net-dir\") pod 
\"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.940150 kubelet[2339]: I0213 19:49:46.939820 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzvh5\" (UniqueName: \"kubernetes.io/projected/8f980a18-9339-47a2-8d22-d71f11933c49-kube-api-access-mzvh5\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.940150 kubelet[2339]: I0213 19:49:46.939845 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdc462cb-a0ba-4702-9ce2-288e0299064b-xtables-lock\") pod \"kube-proxy-cpkhk\" (UID: \"cdc462cb-a0ba-4702-9ce2-288e0299064b\") " pod="kube-system/kube-proxy-cpkhk" Feb 13 19:49:46.940150 kubelet[2339]: I0213 19:49:46.939869 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8q29\" (UniqueName: \"kubernetes.io/projected/43ba3476-37d5-44c8-9a35-8e22f8bd98f5-kube-api-access-f8q29\") pod \"csi-node-driver-fdgfj\" (UID: \"43ba3476-37d5-44c8-9a35-8e22f8bd98f5\") " pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:49:46.940150 kubelet[2339]: I0213 19:49:46.939893 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cdc462cb-a0ba-4702-9ce2-288e0299064b-kube-proxy\") pod \"kube-proxy-cpkhk\" (UID: \"cdc462cb-a0ba-4702-9ce2-288e0299064b\") " pod="kube-system/kube-proxy-cpkhk" Feb 13 19:49:46.940383 kubelet[2339]: I0213 19:49:46.939975 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8f980a18-9339-47a2-8d22-d71f11933c49-node-certs\") pod \"calico-node-8s427\" (UID: 
\"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.940383 kubelet[2339]: I0213 19:49:46.940015 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8f980a18-9339-47a2-8d22-d71f11933c49-flexvol-driver-host\") pod \"calico-node-8s427\" (UID: \"8f980a18-9339-47a2-8d22-d71f11933c49\") " pod="calico-system/calico-node-8s427" Feb 13 19:49:46.940383 kubelet[2339]: I0213 19:49:46.940039 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/43ba3476-37d5-44c8-9a35-8e22f8bd98f5-varrun\") pod \"csi-node-driver-fdgfj\" (UID: \"43ba3476-37d5-44c8-9a35-8e22f8bd98f5\") " pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:49:46.940383 kubelet[2339]: I0213 19:49:46.940063 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43ba3476-37d5-44c8-9a35-8e22f8bd98f5-kubelet-dir\") pod \"csi-node-driver-fdgfj\" (UID: \"43ba3476-37d5-44c8-9a35-8e22f8bd98f5\") " pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:49:46.940383 kubelet[2339]: I0213 19:49:46.940088 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/43ba3476-37d5-44c8-9a35-8e22f8bd98f5-registration-dir\") pod \"csi-node-driver-fdgfj\" (UID: \"43ba3476-37d5-44c8-9a35-8e22f8bd98f5\") " pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:49:46.944336 systemd[1]: Created slice kubepods-besteffort-podcdc462cb_a0ba_4702_9ce2_288e0299064b.slice - libcontainer container kubepods-besteffort-podcdc462cb_a0ba_4702_9ce2_288e0299064b.slice. 
Feb 13 19:49:46.971343 systemd[1]: Created slice kubepods-besteffort-pod8f980a18_9339_47a2_8d22_d71f11933c49.slice - libcontainer container kubepods-besteffort-pod8f980a18_9339_47a2_8d22_d71f11933c49.slice. Feb 13 19:49:47.046031 kubelet[2339]: E0213 19:49:47.045927 2339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.046031 kubelet[2339]: W0213 19:49:47.045974 2339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.046348 kubelet[2339]: E0213 19:49:47.046008 2339 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.052733 kubelet[2339]: E0213 19:49:47.052669 2339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.052733 kubelet[2339]: W0213 19:49:47.052696 2339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.054853 kubelet[2339]: E0213 19:49:47.054751 2339 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:47.110276 kubelet[2339]: E0213 19:49:47.110092 2339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.110276 kubelet[2339]: W0213 19:49:47.110219 2339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.110786 kubelet[2339]: E0213 19:49:47.110641 2339 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.113108 kubelet[2339]: E0213 19:49:47.112986 2339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.113108 kubelet[2339]: W0213 19:49:47.113010 2339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.113108 kubelet[2339]: E0213 19:49:47.113053 2339 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:47.114080 kubelet[2339]: E0213 19:49:47.113956 2339 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.114080 kubelet[2339]: W0213 19:49:47.113974 2339 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.114080 kubelet[2339]: E0213 19:49:47.113994 2339 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.269500 containerd[1885]: time="2025-02-13T19:49:47.269366217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpkhk,Uid:cdc462cb-a0ba-4702-9ce2-288e0299064b,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:47.278618 containerd[1885]: time="2025-02-13T19:49:47.278558314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8s427,Uid:8f980a18-9339-47a2-8d22-d71f11933c49,Namespace:calico-system,Attempt:0,}" Feb 13 19:49:47.913673 kubelet[2339]: E0213 19:49:47.913626 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:49:47.969479 containerd[1885]: time="2025-02-13T19:49:47.968405798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:47.971517 containerd[1885]: time="2025-02-13T19:49:47.971467449Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:47.973060 containerd[1885]: time="2025-02-13T19:49:47.972969529Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:49:47.976067 containerd[1885]: time="2025-02-13T19:49:47.975817903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:47.977917 containerd[1885]: time="2025-02-13T19:49:47.977856872Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:47.982195 containerd[1885]: time="2025-02-13T19:49:47.981829778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:47.985214 containerd[1885]: time="2025-02-13T19:49:47.983603538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 714.071102ms" Feb 13 19:49:47.995617 containerd[1885]: time="2025-02-13T19:49:47.995291110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 716.607116ms" Feb 13 19:49:48.058258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount960990642.mount: Deactivated successfully. 
Feb 13 19:49:48.248897 containerd[1885]: time="2025-02-13T19:49:48.244842291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:48.248897 containerd[1885]: time="2025-02-13T19:49:48.247949832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:48.248897 containerd[1885]: time="2025-02-13T19:49:48.247980193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:48.248897 containerd[1885]: time="2025-02-13T19:49:48.248105713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:48.252083 containerd[1885]: time="2025-02-13T19:49:48.251849618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:48.252083 containerd[1885]: time="2025-02-13T19:49:48.251984388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:48.252083 containerd[1885]: time="2025-02-13T19:49:48.252048826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:48.252508 containerd[1885]: time="2025-02-13T19:49:48.252451552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:48.447355 systemd[1]: run-containerd-runc-k8s.io-05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46-runc.4qf55F.mount: Deactivated successfully. 
Feb 13 19:49:48.459401 systemd[1]: Started cri-containerd-05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46.scope - libcontainer container 05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46.
Feb 13 19:49:48.461398 systemd[1]: Started cri-containerd-3b9dd7615cbb8b785e9305297ab44b8d27f2612691ac2671e9dec781efbc831b.scope - libcontainer container 3b9dd7615cbb8b785e9305297ab44b8d27f2612691ac2671e9dec781efbc831b.
Feb 13 19:49:48.516401 containerd[1885]: time="2025-02-13T19:49:48.516062877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8s427,Uid:8f980a18-9339-47a2-8d22-d71f11933c49,Namespace:calico-system,Attempt:0,} returns sandbox id \"05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46\""
Feb 13 19:49:48.521546 containerd[1885]: time="2025-02-13T19:49:48.520794026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpkhk,Uid:cdc462cb-a0ba-4702-9ce2-288e0299064b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b9dd7615cbb8b785e9305297ab44b8d27f2612691ac2671e9dec781efbc831b\""
Feb 13 19:49:48.521546 containerd[1885]: time="2025-02-13T19:49:48.520844804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 19:49:48.913923 kubelet[2339]: E0213 19:49:48.913811    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:49.113085 kubelet[2339]: E0213 19:49:49.112680    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:49:49.914712 kubelet[2339]: E0213 19:49:49.914615    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:49.952067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36494518.mount: Deactivated successfully.
Feb 13 19:49:50.094505 containerd[1885]: time="2025-02-13T19:49:50.094452516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:50.095826 containerd[1885]: time="2025-02-13T19:49:50.095699191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 19:49:50.099016 containerd[1885]: time="2025-02-13T19:49:50.097540120Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:50.101313 containerd[1885]: time="2025-02-13T19:49:50.100333126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:50.101313 containerd[1885]: time="2025-02-13T19:49:50.101082256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.580207694s"
Feb 13 19:49:50.101313 containerd[1885]: time="2025-02-13T19:49:50.101120136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 19:49:50.103209 containerd[1885]: time="2025-02-13T19:49:50.103154778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 19:49:50.104265 containerd[1885]: time="2025-02-13T19:49:50.104181764Z" level=info msg="CreateContainer within sandbox \"05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 19:49:50.126814 containerd[1885]: time="2025-02-13T19:49:50.126765436Z" level=info msg="CreateContainer within sandbox \"05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d8268275d2f32bb9f5b67da0293680d30eb5c4c59635d345d468fd85e992b57b\""
Feb 13 19:49:50.128179 containerd[1885]: time="2025-02-13T19:49:50.128132054Z" level=info msg="StartContainer for \"d8268275d2f32bb9f5b67da0293680d30eb5c4c59635d345d468fd85e992b57b\""
Feb 13 19:49:50.173447 systemd[1]: Started cri-containerd-d8268275d2f32bb9f5b67da0293680d30eb5c4c59635d345d468fd85e992b57b.scope - libcontainer container d8268275d2f32bb9f5b67da0293680d30eb5c4c59635d345d468fd85e992b57b.
Feb 13 19:49:50.222565 containerd[1885]: time="2025-02-13T19:49:50.222492557Z" level=info msg="StartContainer for \"d8268275d2f32bb9f5b67da0293680d30eb5c4c59635d345d468fd85e992b57b\" returns successfully"
Feb 13 19:49:50.239227 systemd[1]: cri-containerd-d8268275d2f32bb9f5b67da0293680d30eb5c4c59635d345d468fd85e992b57b.scope: Deactivated successfully.
Feb 13 19:49:50.339538 containerd[1885]: time="2025-02-13T19:49:50.339477662Z" level=info msg="shim disconnected" id=d8268275d2f32bb9f5b67da0293680d30eb5c4c59635d345d468fd85e992b57b namespace=k8s.io
Feb 13 19:49:50.339538 containerd[1885]: time="2025-02-13T19:49:50.339531675Z" level=warning msg="cleaning up after shim disconnected" id=d8268275d2f32bb9f5b67da0293680d30eb5c4c59635d345d468fd85e992b57b namespace=k8s.io
Feb 13 19:49:50.339538 containerd[1885]: time="2025-02-13T19:49:50.339542867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:49:50.919989 kubelet[2339]: E0213 19:49:50.916399    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:50.935836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8268275d2f32bb9f5b67da0293680d30eb5c4c59635d345d468fd85e992b57b-rootfs.mount: Deactivated successfully.
Feb 13 19:49:51.113850 kubelet[2339]: E0213 19:49:51.113809    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:49:51.776618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1297606010.mount: Deactivated successfully.
Feb 13 19:49:51.917491 kubelet[2339]: E0213 19:49:51.917387    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:52.773283 containerd[1885]: time="2025-02-13T19:49:52.773235281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:52.774985 containerd[1885]: time="2025-02-13T19:49:52.774932986Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839"
Feb 13 19:49:52.776308 containerd[1885]: time="2025-02-13T19:49:52.776249290Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:52.779915 containerd[1885]: time="2025-02-13T19:49:52.779185241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:52.779915 containerd[1885]: time="2025-02-13T19:49:52.779772058Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.676305579s"
Feb 13 19:49:52.779915 containerd[1885]: time="2025-02-13T19:49:52.779807170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\""
Feb 13 19:49:52.781177 containerd[1885]: time="2025-02-13T19:49:52.781121161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 19:49:52.783712 containerd[1885]: time="2025-02-13T19:49:52.783640020Z" level=info msg="CreateContainer within sandbox \"3b9dd7615cbb8b785e9305297ab44b8d27f2612691ac2671e9dec781efbc831b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:49:52.818111 containerd[1885]: time="2025-02-13T19:49:52.817996788Z" level=info msg="CreateContainer within sandbox \"3b9dd7615cbb8b785e9305297ab44b8d27f2612691ac2671e9dec781efbc831b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0882d8cd3dccf1ebd446456aaad87b2abf665480e97eb4ac604d99e3ea23c3cf\""
Feb 13 19:49:52.820078 containerd[1885]: time="2025-02-13T19:49:52.818629902Z" level=info msg="StartContainer for \"0882d8cd3dccf1ebd446456aaad87b2abf665480e97eb4ac604d99e3ea23c3cf\""
Feb 13 19:49:52.860135 systemd[1]: run-containerd-runc-k8s.io-0882d8cd3dccf1ebd446456aaad87b2abf665480e97eb4ac604d99e3ea23c3cf-runc.qd5bgv.mount: Deactivated successfully.
Feb 13 19:49:52.869414 systemd[1]: Started cri-containerd-0882d8cd3dccf1ebd446456aaad87b2abf665480e97eb4ac604d99e3ea23c3cf.scope - libcontainer container 0882d8cd3dccf1ebd446456aaad87b2abf665480e97eb4ac604d99e3ea23c3cf.
Feb 13 19:49:52.911225 containerd[1885]: time="2025-02-13T19:49:52.911084710Z" level=info msg="StartContainer for \"0882d8cd3dccf1ebd446456aaad87b2abf665480e97eb4ac604d99e3ea23c3cf\" returns successfully"
Feb 13 19:49:52.918567 kubelet[2339]: E0213 19:49:52.918523    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:53.113446 kubelet[2339]: E0213 19:49:53.113057    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:49:53.247362 kubelet[2339]: I0213 19:49:53.247278    2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cpkhk" podStartSLOduration=3.989600346 podStartE2EDuration="8.247251731s" podCreationTimestamp="2025-02-13 19:49:45 +0000 UTC" firstStartedPulling="2025-02-13 19:49:48.523344454 +0000 UTC m=+4.708573397" lastFinishedPulling="2025-02-13 19:49:52.780995803 +0000 UTC m=+8.966224782" observedRunningTime="2025-02-13 19:49:53.246187379 +0000 UTC m=+9.431416373" watchObservedRunningTime="2025-02-13 19:49:53.247251731 +0000 UTC m=+9.432480697"
Feb 13 19:49:53.919432 kubelet[2339]: E0213 19:49:53.919360    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:54.920949 kubelet[2339]: E0213 19:49:54.920800    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:55.113724 kubelet[2339]: E0213 19:49:55.113309    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:49:55.921410 kubelet[2339]: E0213 19:49:55.921366    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:56.923051 kubelet[2339]: E0213 19:49:56.922999    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:57.113750 kubelet[2339]: E0213 19:49:57.113399    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:49:57.923735 kubelet[2339]: E0213 19:49:57.923693    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:58.154631 containerd[1885]: time="2025-02-13T19:49:58.154573573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:58.156805 containerd[1885]: time="2025-02-13T19:49:58.156658889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 19:49:58.160921 containerd[1885]: time="2025-02-13T19:49:58.158864793Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:58.164665 containerd[1885]: time="2025-02-13T19:49:58.164617668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:58.165680 containerd[1885]: time="2025-02-13T19:49:58.165646128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.384485237s"
Feb 13 19:49:58.165819 containerd[1885]: time="2025-02-13T19:49:58.165799977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 19:49:58.171966 containerd[1885]: time="2025-02-13T19:49:58.171298632Z" level=info msg="CreateContainer within sandbox \"05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:49:58.211271 containerd[1885]: time="2025-02-13T19:49:58.211137193Z" level=info msg="CreateContainer within sandbox \"05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7270dde1f35424403b8470a07b9df0f51e0b220c58cd5d8a81db22058746e55e\""
Feb 13 19:49:58.217890 containerd[1885]: time="2025-02-13T19:49:58.217844621Z" level=info msg="StartContainer for \"7270dde1f35424403b8470a07b9df0f51e0b220c58cd5d8a81db22058746e55e\""
Feb 13 19:49:58.322788 systemd[1]: Started cri-containerd-7270dde1f35424403b8470a07b9df0f51e0b220c58cd5d8a81db22058746e55e.scope - libcontainer container 7270dde1f35424403b8470a07b9df0f51e0b220c58cd5d8a81db22058746e55e.
Feb 13 19:49:58.379299 containerd[1885]: time="2025-02-13T19:49:58.378525947Z" level=info msg="StartContainer for \"7270dde1f35424403b8470a07b9df0f51e0b220c58cd5d8a81db22058746e55e\" returns successfully"
Feb 13 19:49:58.924182 kubelet[2339]: E0213 19:49:58.924131    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:49:59.115007 kubelet[2339]: E0213 19:49:59.114592    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:49:59.768978 systemd[1]: cri-containerd-7270dde1f35424403b8470a07b9df0f51e0b220c58cd5d8a81db22058746e55e.scope: Deactivated successfully.
Feb 13 19:49:59.795818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7270dde1f35424403b8470a07b9df0f51e0b220c58cd5d8a81db22058746e55e-rootfs.mount: Deactivated successfully.
Feb 13 19:49:59.798682 kubelet[2339]: I0213 19:49:59.798371    2339 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 19:49:59.924492 kubelet[2339]: E0213 19:49:59.924444    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:00.599249 containerd[1885]: time="2025-02-13T19:50:00.599092996Z" level=info msg="shim disconnected" id=7270dde1f35424403b8470a07b9df0f51e0b220c58cd5d8a81db22058746e55e namespace=k8s.io
Feb 13 19:50:00.599249 containerd[1885]: time="2025-02-13T19:50:00.599251289Z" level=warning msg="cleaning up after shim disconnected" id=7270dde1f35424403b8470a07b9df0f51e0b220c58cd5d8a81db22058746e55e namespace=k8s.io
Feb 13 19:50:00.599835 containerd[1885]: time="2025-02-13T19:50:00.599265925Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:50:00.925712 kubelet[2339]: E0213 19:50:00.925574    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:01.121920 systemd[1]: Created slice kubepods-besteffort-pod43ba3476_37d5_44c8_9a35_8e22f8bd98f5.slice - libcontainer container kubepods-besteffort-pod43ba3476_37d5_44c8_9a35_8e22f8bd98f5.slice.
Feb 13 19:50:01.125092 containerd[1885]: time="2025-02-13T19:50:01.125051994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:0,}"
Feb 13 19:50:01.232909 containerd[1885]: time="2025-02-13T19:50:01.232603929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 19:50:01.377936 containerd[1885]: time="2025-02-13T19:50:01.376624025Z" level=error msg="Failed to destroy network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:01.379229 containerd[1885]: time="2025-02-13T19:50:01.378739045Z" level=error msg="encountered an error cleaning up failed sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:01.379229 containerd[1885]: time="2025-02-13T19:50:01.378999315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:01.385010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67-shm.mount: Deactivated successfully.
Feb 13 19:50:01.395037 kubelet[2339]: E0213 19:50:01.392380    2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:01.395240 kubelet[2339]: E0213 19:50:01.395099    2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:01.395240 kubelet[2339]: E0213 19:50:01.395134    2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:01.395240 kubelet[2339]: E0213 19:50:01.395208    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:50:01.926194 kubelet[2339]: E0213 19:50:01.926130    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:02.233947 kubelet[2339]: I0213 19:50:02.233290    2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67"
Feb 13 19:50:02.234388 containerd[1885]: time="2025-02-13T19:50:02.234347393Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:02.234945 containerd[1885]: time="2025-02-13T19:50:02.234613282Z" level=info msg="Ensure that sandbox 3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67 in task-service has been cleanup successfully"
Feb 13 19:50:02.241266 systemd[1]: run-netns-cni\x2dc49c0ba4\x2d5613\x2dc49c\x2d8d1c\x2df5cd3054db5b.mount: Deactivated successfully.
Feb 13 19:50:02.241665 containerd[1885]: time="2025-02-13T19:50:02.241258236Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:02.241665 containerd[1885]: time="2025-02-13T19:50:02.241294326Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:02.242688 containerd[1885]: time="2025-02-13T19:50:02.242656049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:1,}"
Feb 13 19:50:02.423861 containerd[1885]: time="2025-02-13T19:50:02.423802526Z" level=error msg="Failed to destroy network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:02.430189 containerd[1885]: time="2025-02-13T19:50:02.424236412Z" level=error msg="encountered an error cleaning up failed sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:02.430189 containerd[1885]: time="2025-02-13T19:50:02.430070173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:02.430155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25-shm.mount: Deactivated successfully.
Feb 13 19:50:02.431226 kubelet[2339]: E0213 19:50:02.430781    2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:02.431226 kubelet[2339]: E0213 19:50:02.430848    2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:02.431226 kubelet[2339]: E0213 19:50:02.430883    2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:02.431662 kubelet[2339]: E0213 19:50:02.430941    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:50:02.926391 kubelet[2339]: E0213 19:50:02.926326    2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:03.237114 kubelet[2339]: I0213 19:50:03.236993    2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25"
Feb 13 19:50:03.238581 containerd[1885]: time="2025-02-13T19:50:03.237937648Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:03.238581 containerd[1885]: time="2025-02-13T19:50:03.238207498Z" level=info msg="Ensure that sandbox 8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25 in task-service has been cleanup successfully"
Feb 13 19:50:03.239471 containerd[1885]: time="2025-02-13T19:50:03.239306780Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully"
Feb 13 19:50:03.239471 containerd[1885]: time="2025-02-13T19:50:03.239374945Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully"
Feb 13 19:50:03.242478 systemd[1]: run-netns-cni\x2ddd20a964\x2df9f8\x2d715e\x2d8d53\x2dffbc376bada4.mount: Deactivated successfully.
Feb 13 19:50:03.243235 containerd[1885]: time="2025-02-13T19:50:03.242964569Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:03.243235 containerd[1885]: time="2025-02-13T19:50:03.243098800Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:03.243235 containerd[1885]: time="2025-02-13T19:50:03.243115321Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:03.244449 containerd[1885]: time="2025-02-13T19:50:03.244420454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:2,}"
Feb 13 19:50:03.475107 containerd[1885]: time="2025-02-13T19:50:03.475052262Z" level=error msg="Failed to destroy network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:03.479176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a-shm.mount: Deactivated successfully.
Feb 13 19:50:03.479534 containerd[1885]: time="2025-02-13T19:50:03.479484796Z" level=error msg="encountered an error cleaning up failed sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:03.479709 containerd[1885]: time="2025-02-13T19:50:03.479652213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:03.481354 kubelet[2339]: E0213 19:50:03.481307 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:03.482292 kubelet[2339]: E0213 19:50:03.482264 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:03.482788 kubelet[2339]: E0213 19:50:03.482410 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:03.482888 kubelet[2339]: E0213 19:50:03.482490 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:50:03.926746 kubelet[2339]: E0213 19:50:03.926679 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:04.250273 kubelet[2339]: I0213 19:50:04.246714 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a"
Feb 13 19:50:04.250484 containerd[1885]: time="2025-02-13T19:50:04.248111673Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\""
Feb 13 19:50:04.250484 containerd[1885]: time="2025-02-13T19:50:04.248371548Z" level=info msg="Ensure that sandbox e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a in task-service has been cleanup successfully"
Feb 13 19:50:04.254758 containerd[1885]: time="2025-02-13T19:50:04.253393638Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully"
Feb 13 19:50:04.254758 containerd[1885]: time="2025-02-13T19:50:04.253432445Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully"
Feb 13 19:50:04.255154 systemd[1]: run-netns-cni\x2d4694b58f\x2dddb7\x2d4cbe\x2da851\x2dc0c45244edfd.mount: Deactivated successfully.
Feb 13 19:50:04.258642 containerd[1885]: time="2025-02-13T19:50:04.255727385Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:04.258642 containerd[1885]: time="2025-02-13T19:50:04.255841374Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully"
Feb 13 19:50:04.258642 containerd[1885]: time="2025-02-13T19:50:04.255856393Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully"
Feb 13 19:50:04.259930 containerd[1885]: time="2025-02-13T19:50:04.259179629Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:04.259930 containerd[1885]: time="2025-02-13T19:50:04.259484294Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:04.259930 containerd[1885]: time="2025-02-13T19:50:04.259508314Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:04.260793 containerd[1885]: time="2025-02-13T19:50:04.260363561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:3,}"
Feb 13 19:50:04.377393 containerd[1885]: time="2025-02-13T19:50:04.377336780Z" level=error msg="Failed to destroy network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:04.379957 containerd[1885]: time="2025-02-13T19:50:04.379898860Z" level=error msg="encountered an error cleaning up failed sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:04.380230 containerd[1885]: time="2025-02-13T19:50:04.379995073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:04.381438 kubelet[2339]: E0213 19:50:04.380449 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:04.381438 kubelet[2339]: E0213 19:50:04.381304 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:04.381438 kubelet[2339]: E0213 19:50:04.381375 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:04.381992 kubelet[2339]: E0213 19:50:04.381676 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:50:04.382470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378-shm.mount: Deactivated successfully.
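[Annotation] Each retry in the loop above logs the same machine-readable `PodSandboxMetadata` blob. As a minimal illustration (a hypothetical helper, not tooling that appears in this log), the pod name, UID, namespace, and attempt counter can be pulled out of such a line with a regular expression:

```python
import re

# A RunPodSandbox failure line, condensed from the log above (attempt 3).
LINE = ('RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,'
        'Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,'
        'Namespace:calico-system,Attempt:3,} failed, error')

def parse_sandbox_metadata(line):
    """Extract Name/Uid/Namespace/Attempt from a PodSandboxMetadata blob."""
    m = re.search(r'Name:([^,]+),Uid:([^,]+),Namespace:([^,]+),Attempt:(\d+)', line)
    if m is None:
        return None
    name, uid, namespace, attempt = m.groups()
    return {'name': name, 'uid': uid,
            'namespace': namespace, 'attempt': int(attempt)}

print(parse_sandbox_metadata(LINE))
```

The attempt counter climbing with each RunPodSandbox entry is the kubelet recreating the sandbox after every CNI failure.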
Feb 13 19:50:04.909007 kubelet[2339]: E0213 19:50:04.908955 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:04.927854 kubelet[2339]: E0213 19:50:04.927800 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:05.250522 kubelet[2339]: I0213 19:50:05.250404 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378"
Feb 13 19:50:05.254058 containerd[1885]: time="2025-02-13T19:50:05.254006445Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\""
Feb 13 19:50:05.254573 containerd[1885]: time="2025-02-13T19:50:05.254281743Z" level=info msg="Ensure that sandbox e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378 in task-service has been cleanup successfully"
Feb 13 19:50:05.254573 containerd[1885]: time="2025-02-13T19:50:05.254497808Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully"
Feb 13 19:50:05.254573 containerd[1885]: time="2025-02-13T19:50:05.254517851Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully"
Feb 13 19:50:05.257179 containerd[1885]: time="2025-02-13T19:50:05.255072883Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\""
Feb 13 19:50:05.257179 containerd[1885]: time="2025-02-13T19:50:05.256954916Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully"
Feb 13 19:50:05.257179 containerd[1885]: time="2025-02-13T19:50:05.256983255Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully"
Feb 13 19:50:05.262802 containerd[1885]: time="2025-02-13T19:50:05.259213537Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:05.262802 containerd[1885]: time="2025-02-13T19:50:05.259684390Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully"
Feb 13 19:50:05.262802 containerd[1885]: time="2025-02-13T19:50:05.259706338Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully"
Feb 13 19:50:05.264471 systemd[1]: run-netns-cni\x2d003b3b79\x2d7433\x2d719d\x2daffa\x2d6bfcf1aa0a7b.mount: Deactivated successfully.
Feb 13 19:50:05.272408 containerd[1885]: time="2025-02-13T19:50:05.266394986Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:05.272408 containerd[1885]: time="2025-02-13T19:50:05.267204131Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:05.272408 containerd[1885]: time="2025-02-13T19:50:05.267597467Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:05.289355 containerd[1885]: time="2025-02-13T19:50:05.274278079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:4,}"
Feb 13 19:50:05.416481 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 19:50:05.888763 containerd[1885]: time="2025-02-13T19:50:05.888709927Z" level=error msg="Failed to destroy network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:05.891605 containerd[1885]: time="2025-02-13T19:50:05.890916736Z" level=error msg="encountered an error cleaning up failed sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:05.891605 containerd[1885]: time="2025-02-13T19:50:05.891028909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:05.891795 kubelet[2339]: E0213 19:50:05.891422 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:05.891795 kubelet[2339]: E0213 19:50:05.891485 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:05.891795 kubelet[2339]: E0213 19:50:05.891515 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:05.892008 kubelet[2339]: E0213 19:50:05.891568 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:50:05.894816 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573-shm.mount: Deactivated successfully.
Feb 13 19:50:05.929313 kubelet[2339]: E0213 19:50:05.929149 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:06.268200 kubelet[2339]: I0213 19:50:06.264984 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573"
Feb 13 19:50:06.268332 containerd[1885]: time="2025-02-13T19:50:06.266032410Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\""
Feb 13 19:50:06.268332 containerd[1885]: time="2025-02-13T19:50:06.266446254Z" level=info msg="Ensure that sandbox d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573 in task-service has been cleanup successfully"
Feb 13 19:50:06.268332 containerd[1885]: time="2025-02-13T19:50:06.267023749Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully"
Feb 13 19:50:06.268332 containerd[1885]: time="2025-02-13T19:50:06.267046526Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully"
Feb 13 19:50:06.271238 systemd[1]: run-netns-cni\x2de0b94260\x2d3d55\x2d56cd\x2d99eb\x2d5f7142e28267.mount: Deactivated successfully.
Feb 13 19:50:06.272303 containerd[1885]: time="2025-02-13T19:50:06.272264599Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\""
Feb 13 19:50:06.272950 containerd[1885]: time="2025-02-13T19:50:06.272921594Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully"
Feb 13 19:50:06.274086 containerd[1885]: time="2025-02-13T19:50:06.274054912Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully"
Feb 13 19:50:06.275129 containerd[1885]: time="2025-02-13T19:50:06.275104329Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\""
Feb 13 19:50:06.275358 containerd[1885]: time="2025-02-13T19:50:06.275338698Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully"
Feb 13 19:50:06.275540 containerd[1885]: time="2025-02-13T19:50:06.275521973Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully"
Feb 13 19:50:06.276323 containerd[1885]: time="2025-02-13T19:50:06.276302093Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:06.276665 containerd[1885]: time="2025-02-13T19:50:06.276644584Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully"
Feb 13 19:50:06.276853 containerd[1885]: time="2025-02-13T19:50:06.276833832Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully"
Feb 13 19:50:06.277765 containerd[1885]: time="2025-02-13T19:50:06.277577131Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:06.277765 containerd[1885]: time="2025-02-13T19:50:06.277678912Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:06.277765 containerd[1885]: time="2025-02-13T19:50:06.277730628Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:06.278320 containerd[1885]: time="2025-02-13T19:50:06.278294088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:5,}"
Feb 13 19:50:06.455560 containerd[1885]: time="2025-02-13T19:50:06.455434430Z" level=error msg="Failed to destroy network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:06.455829 systemd[1]: Created slice kubepods-besteffort-pod43673bfd_14eb_4ac2_bae2_c2c6f50ab414.slice - libcontainer container kubepods-besteffort-pod43673bfd_14eb_4ac2_bae2_c2c6f50ab414.slice.
Feb 13 19:50:06.460478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12-shm.mount: Deactivated successfully.
Feb 13 19:50:06.460712 containerd[1885]: time="2025-02-13T19:50:06.460462679Z" level=error msg="encountered an error cleaning up failed sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:06.460979 containerd[1885]: time="2025-02-13T19:50:06.460820631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:06.463229 kubelet[2339]: E0213 19:50:06.461940 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:06.463229 kubelet[2339]: E0213 19:50:06.461999 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:06.463229 kubelet[2339]: E0213 19:50:06.462029 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:06.463413 kubelet[2339]: E0213 19:50:06.462074 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:50:06.534579 kubelet[2339]: I0213 19:50:06.534450 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xvp\" (UniqueName: \"kubernetes.io/projected/43673bfd-14eb-4ac2-bae2-c2c6f50ab414-kube-api-access-c4xvp\") pod \"nginx-deployment-7fcdb87857-5nrdn\" (UID: \"43673bfd-14eb-4ac2-bae2-c2c6f50ab414\") " pod="default/nginx-deployment-7fcdb87857-5nrdn"
Feb 13 19:50:06.776039 containerd[1885]: time="2025-02-13T19:50:06.775994105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:0,}"
Feb 13 19:50:06.919395 containerd[1885]: time="2025-02-13T19:50:06.919340232Z" level=error msg="Failed to destroy network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:06.920334 containerd[1885]: time="2025-02-13T19:50:06.920284284Z" level=error msg="encountered an error cleaning up failed sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:06.920440 containerd[1885]: time="2025-02-13T19:50:06.920375006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:06.920828 kubelet[2339]: E0213 19:50:06.920789 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:06.920922 kubelet[2339]: E0213 19:50:06.920850 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn"
Feb 13 19:50:06.920922 kubelet[2339]: E0213 19:50:06.920877 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn"
Feb 13 19:50:06.921008 kubelet[2339]: E0213 19:50:06.920933 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-5nrdn" podUID="43673bfd-14eb-4ac2-bae2-c2c6f50ab414"
Feb 13 19:50:06.930092 kubelet[2339]: E0213 19:50:06.929931 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:07.273006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1-shm.mount: Deactivated successfully.
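[Annotation] At this point both the csi-node-driver pod and the nginx pod are failing with the same single root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, and the error text itself points at calico/node not running or not having mounted /var/lib/calico/. As a minimal sketch (a hypothetical helper, not tooling from this log), the missing path can be pulled out of the error string that every failure above shares:

```python
import re

# The CNI error string shared by every failed attempt above (quoted verbatim,
# with the log's backslash escaping removed).
CNI_ERROR = ('plugin type="calico" failed (add): stat /var/lib/calico/nodename: '
             'no such file or directory: check that the calico/node container '
             'is running and has mounted /var/lib/calico/')

def missing_path(err):
    """Return the path whose stat() failed, or None if the error is unrelated."""
    m = re.search(r'stat (\S+): no such file or directory', err)
    return m.group(1) if m else None

print(missing_path(CNI_ERROR))  # /var/lib/calico/nodename
```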
Feb 13 19:50:07.280855 kubelet[2339]: I0213 19:50:07.280824 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12"
Feb 13 19:50:07.282992 containerd[1885]: time="2025-02-13T19:50:07.282943764Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\""
Feb 13 19:50:07.284870 kubelet[2339]: I0213 19:50:07.284306 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1"
Feb 13 19:50:07.285963 containerd[1885]: time="2025-02-13T19:50:07.284619613Z" level=info msg="Ensure that sandbox 5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12 in task-service has been cleanup successfully"
Feb 13 19:50:07.290266 containerd[1885]: time="2025-02-13T19:50:07.290194542Z" level=info msg="TearDown network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" successfully"
Feb 13 19:50:07.290446 containerd[1885]: time="2025-02-13T19:50:07.290234052Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" returns successfully"
Feb 13 19:50:07.291096 containerd[1885]: time="2025-02-13T19:50:07.291063162Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\""
Feb 13 19:50:07.292153 containerd[1885]: time="2025-02-13T19:50:07.291521223Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully"
Feb 13 19:50:07.292153 containerd[1885]: time="2025-02-13T19:50:07.291603080Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully"
Feb 13 19:50:07.292153 containerd[1885]: time="2025-02-13T19:50:07.291711248Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\""
Feb 13 19:50:07.292153 containerd[1885]: time="2025-02-13T19:50:07.291976178Z" level=info msg="Ensure that sandbox 2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1 in task-service has been cleanup successfully"
Feb 13 19:50:07.291894 systemd[1]: run-netns-cni\x2d14eca98e\x2d416b\x2d4681\x2da823\x2d81ae12f5dc5b.mount: Deactivated successfully.
Feb 13 19:50:07.292475 containerd[1885]: time="2025-02-13T19:50:07.292311056Z" level=info msg="TearDown network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" successfully"
Feb 13 19:50:07.292475 containerd[1885]: time="2025-02-13T19:50:07.292329993Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" returns successfully"
Feb 13 19:50:07.319725 containerd[1885]: time="2025-02-13T19:50:07.305464072Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\""
Feb 13 19:50:07.319725 containerd[1885]: time="2025-02-13T19:50:07.305640035Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully"
Feb 13 19:50:07.319725 containerd[1885]: time="2025-02-13T19:50:07.305657059Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully"
Feb 13 19:50:07.319725 containerd[1885]: time="2025-02-13T19:50:07.305804123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:1,}"
Feb 13 19:50:07.326296 systemd[1]: run-netns-cni\x2d5d4d3971\x2d42b2\x2d6b3c\x2d0c84\x2d07a1c3b2ea71.mount: Deactivated successfully.
Feb 13 19:50:07.331850 containerd[1885]: time="2025-02-13T19:50:07.331812712Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\""
Feb 13 19:50:07.332011 containerd[1885]: time="2025-02-13T19:50:07.331945491Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully"
Feb 13 19:50:07.332011 containerd[1885]: time="2025-02-13T19:50:07.331960839Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully"
Feb 13 19:50:07.333627 containerd[1885]: time="2025-02-13T19:50:07.333217046Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:07.333627 containerd[1885]: time="2025-02-13T19:50:07.333334498Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully"
Feb 13 19:50:07.333627 containerd[1885]: time="2025-02-13T19:50:07.333349422Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully"
Feb 13 19:50:07.335227 containerd[1885]: time="2025-02-13T19:50:07.335196895Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:07.337344 containerd[1885]: time="2025-02-13T19:50:07.337299459Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:07.337344 containerd[1885]: time="2025-02-13T19:50:07.337325185Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:07.339866 containerd[1885]: time="2025-02-13T19:50:07.338965575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:6,}"
Feb 13 19:50:07.779439 containerd[1885]: time="2025-02-13T19:50:07.779384149Z" level=error msg="Failed to destroy network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:07.780015 containerd[1885]: time="2025-02-13T19:50:07.779977012Z" level=error msg="encountered an error cleaning up failed sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:07.780110 containerd[1885]: time="2025-02-13T19:50:07.780062442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:07.781800 kubelet[2339]: E0213 19:50:07.781318 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:07.781800 kubelet[2339]: E0213 19:50:07.781476 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn"
Feb 13 19:50:07.781800 kubelet[2339]: E0213 19:50:07.781508 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn"
Feb 13 19:50:07.782192 kubelet[2339]: E0213 19:50:07.781568 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-5nrdn" podUID="43673bfd-14eb-4ac2-bae2-c2c6f50ab414"
Feb 13 19:50:07.797238 containerd[1885]: time="2025-02-13T19:50:07.797052736Z" level=error msg="Failed to destroy network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\"" error="plugin type=\"calico\"
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:07.797494 containerd[1885]: time="2025-02-13T19:50:07.797466360Z" level=error msg="encountered an error cleaning up failed sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:07.797573 containerd[1885]: time="2025-02-13T19:50:07.797542696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:07.798075 kubelet[2339]: E0213 19:50:07.797819 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:07.798075 kubelet[2339]: E0213 19:50:07.797903 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:50:07.798075 kubelet[2339]: E0213 19:50:07.797933 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:50:07.798452 kubelet[2339]: E0213 19:50:07.798063 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5" Feb 13 19:50:07.931384 kubelet[2339]: E0213 19:50:07.931340 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:08.271773 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9-shm.mount: Deactivated successfully. Feb 13 19:50:08.272290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b-shm.mount: Deactivated successfully. 
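Every sandbox add/delete above fails on the same precondition: the Calico CNI plugin stat()s /var/lib/calico/nodename and aborts when the file is absent, which is the case while the calico/node container has not started (or has not mounted /var/lib/calico/). A minimal sketch of that check follows; the function name and its optional root argument are illustrative helpers, not part of Calico itself:

```shell
# Sketch of the precondition the Calico CNI plugin enforces before any
# network add/delete: /var/lib/calico/nodename must exist (it is written
# by the calico-node container once it is up).
# check_calico_nodename and its root argument are illustrative, not a
# real Calico tool.
check_calico_nodename() {
    root="${1:-}"   # optional prefix so the check can be run against a test tree
    if [ -f "$root/var/lib/calico/nodename" ]; then
        echo "ok: $(cat "$root/var/lib/calico/nodename")"
        return 0
    fi
    echo "missing: $root/var/lib/calico/nodename" >&2
    return 1
}
```

On a live node the usual remedy is to wait for (or repair) the calico-node DaemonSet pod, which creates the file on startup; until then every RunPodSandbox attempt fails exactly as logged here.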
Feb 13 19:50:08.291113 kubelet[2339]: I0213 19:50:08.291074 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9" Feb 13 19:50:08.295330 containerd[1885]: time="2025-02-13T19:50:08.292366436Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\"" Feb 13 19:50:08.295330 containerd[1885]: time="2025-02-13T19:50:08.292635395Z" level=info msg="Ensure that sandbox 88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9 in task-service has been cleanup successfully" Feb 13 19:50:08.295859 containerd[1885]: time="2025-02-13T19:50:08.295822161Z" level=info msg="TearDown network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" successfully" Feb 13 19:50:08.295961 containerd[1885]: time="2025-02-13T19:50:08.295945677Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" returns successfully" Feb 13 19:50:08.297320 containerd[1885]: time="2025-02-13T19:50:08.296469558Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\"" Feb 13 19:50:08.297320 containerd[1885]: time="2025-02-13T19:50:08.296573173Z" level=info msg="TearDown network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" successfully" Feb 13 19:50:08.297320 containerd[1885]: time="2025-02-13T19:50:08.296587735Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" returns successfully" Feb 13 19:50:08.296529 systemd[1]: run-netns-cni\x2dc439dac8\x2da4ba\x2d6b3e\x2dc5b5\x2dc4a6415b53cb.mount: Deactivated successfully. 
Feb 13 19:50:08.298091 containerd[1885]: time="2025-02-13T19:50:08.298064112Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\"" Feb 13 19:50:08.298315 containerd[1885]: time="2025-02-13T19:50:08.298294384Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully" Feb 13 19:50:08.298407 containerd[1885]: time="2025-02-13T19:50:08.298390776Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully" Feb 13 19:50:08.299747 containerd[1885]: time="2025-02-13T19:50:08.298933468Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\"" Feb 13 19:50:08.299747 containerd[1885]: time="2025-02-13T19:50:08.299025699Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully" Feb 13 19:50:08.299747 containerd[1885]: time="2025-02-13T19:50:08.299042825Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully" Feb 13 19:50:08.299918 kubelet[2339]: I0213 19:50:08.299354 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b" Feb 13 19:50:08.300271 containerd[1885]: time="2025-02-13T19:50:08.300250871Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\"" Feb 13 19:50:08.300447 containerd[1885]: time="2025-02-13T19:50:08.300429646Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully" Feb 13 19:50:08.300617 containerd[1885]: time="2025-02-13T19:50:08.300598676Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns 
successfully" Feb 13 19:50:08.300994 containerd[1885]: time="2025-02-13T19:50:08.300974733Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\"" Feb 13 19:50:08.301395 containerd[1885]: time="2025-02-13T19:50:08.301372813Z" level=info msg="Ensure that sandbox efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b in task-service has been cleanup successfully" Feb 13 19:50:08.304082 systemd[1]: run-netns-cni\x2dc63e23f2\x2d5050\x2d0624\x2d76bc\x2dc154b6eb9635.mount: Deactivated successfully. Feb 13 19:50:08.304633 containerd[1885]: time="2025-02-13T19:50:08.304107577Z" level=info msg="TearDown network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" successfully" Feb 13 19:50:08.304633 containerd[1885]: time="2025-02-13T19:50:08.304137945Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" returns successfully" Feb 13 19:50:08.305185 containerd[1885]: time="2025-02-13T19:50:08.304916785Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\"" Feb 13 19:50:08.305185 containerd[1885]: time="2025-02-13T19:50:08.305019549Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully" Feb 13 19:50:08.305185 containerd[1885]: time="2025-02-13T19:50:08.305035030Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully" Feb 13 19:50:08.305185 containerd[1885]: time="2025-02-13T19:50:08.305109901Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\"" Feb 13 19:50:08.305712 containerd[1885]: time="2025-02-13T19:50:08.305578234Z" level=info msg="TearDown network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" successfully" Feb 13 19:50:08.305712 containerd[1885]: 
time="2025-02-13T19:50:08.305619688Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" returns successfully" Feb 13 19:50:08.306910 containerd[1885]: time="2025-02-13T19:50:08.306353318Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\"" Feb 13 19:50:08.306910 containerd[1885]: time="2025-02-13T19:50:08.306442252Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully" Feb 13 19:50:08.306910 containerd[1885]: time="2025-02-13T19:50:08.306458742Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully" Feb 13 19:50:08.307389 containerd[1885]: time="2025-02-13T19:50:08.307364758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:2,}" Feb 13 19:50:08.308590 containerd[1885]: time="2025-02-13T19:50:08.308564983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:7,}" Feb 13 19:50:08.558344 containerd[1885]: time="2025-02-13T19:50:08.558290308Z" level=error msg="Failed to destroy network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:08.558925 containerd[1885]: time="2025-02-13T19:50:08.558886040Z" level=error msg="encountered an error cleaning up failed sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:08.559104 containerd[1885]: time="2025-02-13T19:50:08.559079308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:08.560386 kubelet[2339]: E0213 19:50:08.560339 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:08.560577 kubelet[2339]: E0213 19:50:08.560418 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn" Feb 13 19:50:08.560577 kubelet[2339]: E0213 19:50:08.560451 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn" Feb 13 19:50:08.560577 kubelet[2339]: E0213 19:50:08.560504 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-5nrdn" podUID="43673bfd-14eb-4ac2-bae2-c2c6f50ab414" Feb 13 19:50:08.577480 containerd[1885]: time="2025-02-13T19:50:08.577312475Z" level=error msg="Failed to destroy network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:08.578227 containerd[1885]: time="2025-02-13T19:50:08.578113584Z" level=error msg="encountered an error cleaning up failed sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:08.578665 containerd[1885]: time="2025-02-13T19:50:08.578514374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for 
sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:08.578835 kubelet[2339]: E0213 19:50:08.578756 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:08.578948 kubelet[2339]: E0213 19:50:08.578829 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:50:08.578948 kubelet[2339]: E0213 19:50:08.578856 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:50:08.578948 kubelet[2339]: E0213 19:50:08.578906 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5" Feb 13 19:50:08.932550 kubelet[2339]: E0213 19:50:08.932210 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:09.273018 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd-shm.mount: Deactivated successfully. Feb 13 19:50:09.273407 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f-shm.mount: Deactivated successfully. 
Feb 13 19:50:09.306411 kubelet[2339]: I0213 19:50:09.306373 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f" Feb 13 19:50:09.310625 containerd[1885]: time="2025-02-13T19:50:09.307767425Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\"" Feb 13 19:50:09.310625 containerd[1885]: time="2025-02-13T19:50:09.308259614Z" level=info msg="Ensure that sandbox 6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f in task-service has been cleanup successfully" Feb 13 19:50:09.316056 kubelet[2339]: I0213 19:50:09.316005 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd" Feb 13 19:50:09.318435 containerd[1885]: time="2025-02-13T19:50:09.316447440Z" level=info msg="TearDown network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" successfully" Feb 13 19:50:09.318625 containerd[1885]: time="2025-02-13T19:50:09.318598547Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" returns successfully" Feb 13 19:50:09.319988 containerd[1885]: time="2025-02-13T19:50:09.317050834Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\"" Feb 13 19:50:09.319979 systemd[1]: run-netns-cni\x2d3b6fc5b7\x2d8cb0\x2dbeef\x2d4b7f\x2d58bd448f2aa0.mount: Deactivated successfully. 
Feb 13 19:50:09.324914 containerd[1885]: time="2025-02-13T19:50:09.324878142Z" level=info msg="Ensure that sandbox 19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd in task-service has been cleanup successfully" Feb 13 19:50:09.325439 containerd[1885]: time="2025-02-13T19:50:09.325412198Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\"" Feb 13 19:50:09.327220 containerd[1885]: time="2025-02-13T19:50:09.325609705Z" level=info msg="TearDown network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" successfully" Feb 13 19:50:09.327220 containerd[1885]: time="2025-02-13T19:50:09.325670679Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" returns successfully" Feb 13 19:50:09.328194 containerd[1885]: time="2025-02-13T19:50:09.328062646Z" level=info msg="TearDown network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" successfully" Feb 13 19:50:09.328409 containerd[1885]: time="2025-02-13T19:50:09.328279522Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" returns successfully" Feb 13 19:50:09.330313 containerd[1885]: time="2025-02-13T19:50:09.329963119Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\"" Feb 13 19:50:09.330421 systemd[1]: run-netns-cni\x2dd26848da\x2d9006\x2d964f\x2d4004\x2d3f9d56af1e37.mount: Deactivated successfully. 
Feb 13 19:50:09.334483 containerd[1885]: time="2025-02-13T19:50:09.334286731Z" level=info msg="TearDown network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" successfully" Feb 13 19:50:09.334873 containerd[1885]: time="2025-02-13T19:50:09.334851900Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" returns successfully" Feb 13 19:50:09.335060 containerd[1885]: time="2025-02-13T19:50:09.335046939Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\"" Feb 13 19:50:09.335368 containerd[1885]: time="2025-02-13T19:50:09.335236771Z" level=info msg="TearDown network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" successfully" Feb 13 19:50:09.335368 containerd[1885]: time="2025-02-13T19:50:09.335255573Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" returns successfully" Feb 13 19:50:09.336301 containerd[1885]: time="2025-02-13T19:50:09.336275479Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\"" Feb 13 19:50:09.336528 containerd[1885]: time="2025-02-13T19:50:09.336509007Z" level=info msg="TearDown network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" successfully" Feb 13 19:50:09.336609 containerd[1885]: time="2025-02-13T19:50:09.336596321Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" returns successfully" Feb 13 19:50:09.337280 containerd[1885]: time="2025-02-13T19:50:09.336840163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:3,}" Feb 13 19:50:09.338354 containerd[1885]: time="2025-02-13T19:50:09.338322288Z" level=info msg="StopPodSandbox for 
\"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\"" Feb 13 19:50:09.338577 containerd[1885]: time="2025-02-13T19:50:09.338520748Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully" Feb 13 19:50:09.338577 containerd[1885]: time="2025-02-13T19:50:09.338539462Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully" Feb 13 19:50:09.340930 containerd[1885]: time="2025-02-13T19:50:09.340895510Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\"" Feb 13 19:50:09.341043 containerd[1885]: time="2025-02-13T19:50:09.341009636Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully" Feb 13 19:50:09.341043 containerd[1885]: time="2025-02-13T19:50:09.341025612Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully" Feb 13 19:50:09.346153 containerd[1885]: time="2025-02-13T19:50:09.346056968Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\"" Feb 13 19:50:09.346332 containerd[1885]: time="2025-02-13T19:50:09.346265000Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully" Feb 13 19:50:09.346332 containerd[1885]: time="2025-02-13T19:50:09.346283355Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully" Feb 13 19:50:09.359780 containerd[1885]: time="2025-02-13T19:50:09.347384174Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\"" Feb 13 19:50:09.364072 containerd[1885]: time="2025-02-13T19:50:09.364032503Z" level=info msg="TearDown network for sandbox 
\"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully" Feb 13 19:50:09.364072 containerd[1885]: time="2025-02-13T19:50:09.364069247Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully" Feb 13 19:50:09.364927 containerd[1885]: time="2025-02-13T19:50:09.364764030Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\"" Feb 13 19:50:09.365255 containerd[1885]: time="2025-02-13T19:50:09.365201506Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully" Feb 13 19:50:09.365255 containerd[1885]: time="2025-02-13T19:50:09.365220960Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully" Feb 13 19:50:09.368679 containerd[1885]: time="2025-02-13T19:50:09.368633414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:8,}" Feb 13 19:50:09.712962 containerd[1885]: time="2025-02-13T19:50:09.712901710Z" level=error msg="Failed to destroy network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:09.713558 containerd[1885]: time="2025-02-13T19:50:09.713242852Z" level=error msg="encountered an error cleaning up failed sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:09.713558 containerd[1885]: 
time="2025-02-13T19:50:09.713407500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:09.713822 kubelet[2339]: E0213 19:50:09.713648 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:09.713822 kubelet[2339]: E0213 19:50:09.713757 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn" Feb 13 19:50:09.713822 kubelet[2339]: E0213 19:50:09.713789 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn" Feb 13 19:50:09.714031 kubelet[2339]: E0213 
19:50:09.713904 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-5nrdn" podUID="43673bfd-14eb-4ac2-bae2-c2c6f50ab414" Feb 13 19:50:09.746981 containerd[1885]: time="2025-02-13T19:50:09.746936527Z" level=error msg="Failed to destroy network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:09.747473 containerd[1885]: time="2025-02-13T19:50:09.747432181Z" level=error msg="encountered an error cleaning up failed sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:09.747570 containerd[1885]: time="2025-02-13T19:50:09.747511397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:09.748125 kubelet[2339]: E0213 19:50:09.747790 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:09.748125 kubelet[2339]: E0213 19:50:09.747846 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:50:09.748125 kubelet[2339]: E0213 19:50:09.747867 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:50:09.748286 kubelet[2339]: E0213 19:50:09.747909 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5" Feb 13 19:50:09.933347 kubelet[2339]: E0213 19:50:09.933306 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:10.274348 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520-shm.mount: Deactivated successfully. Feb 13 19:50:10.326208 kubelet[2339]: I0213 19:50:10.325763 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f" Feb 13 19:50:10.329181 containerd[1885]: time="2025-02-13T19:50:10.326639453Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\"" Feb 13 19:50:10.329181 containerd[1885]: time="2025-02-13T19:50:10.326846833Z" level=info msg="Ensure that sandbox a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f in task-service has been cleanup successfully" Feb 13 19:50:10.330522 containerd[1885]: time="2025-02-13T19:50:10.330254153Z" level=info msg="TearDown network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" successfully" Feb 13 19:50:10.330522 containerd[1885]: time="2025-02-13T19:50:10.330289959Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" returns successfully" Feb 13 19:50:10.330643 systemd[1]: run-netns-cni\x2d3a60fad8\x2db113\x2d7911\x2dc307\x2d7e7eaf668d08.mount: Deactivated successfully. 
Feb 13 19:50:10.332625 containerd[1885]: time="2025-02-13T19:50:10.332514509Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\"" Feb 13 19:50:10.333009 containerd[1885]: time="2025-02-13T19:50:10.332784155Z" level=info msg="TearDown network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" successfully" Feb 13 19:50:10.333009 containerd[1885]: time="2025-02-13T19:50:10.332804395Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" returns successfully" Feb 13 19:50:10.333948 containerd[1885]: time="2025-02-13T19:50:10.333788947Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\"" Feb 13 19:50:10.335687 containerd[1885]: time="2025-02-13T19:50:10.335420280Z" level=info msg="TearDown network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" successfully" Feb 13 19:50:10.335687 containerd[1885]: time="2025-02-13T19:50:10.335455811Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" returns successfully" Feb 13 19:50:10.336615 containerd[1885]: time="2025-02-13T19:50:10.336554065Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\"" Feb 13 19:50:10.336705 containerd[1885]: time="2025-02-13T19:50:10.336657133Z" level=info msg="TearDown network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" successfully" Feb 13 19:50:10.336705 containerd[1885]: time="2025-02-13T19:50:10.336687497Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" returns successfully" Feb 13 19:50:10.339217 kubelet[2339]: I0213 19:50:10.338966 2339 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520" Feb 13 19:50:10.341212 containerd[1885]: time="2025-02-13T19:50:10.340080848Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\"" Feb 13 19:50:10.341212 containerd[1885]: time="2025-02-13T19:50:10.340541858Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully" Feb 13 19:50:10.341212 containerd[1885]: time="2025-02-13T19:50:10.340613485Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully" Feb 13 19:50:10.341587 containerd[1885]: time="2025-02-13T19:50:10.341529441Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\"" Feb 13 19:50:10.341806 containerd[1885]: time="2025-02-13T19:50:10.341783938Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully" Feb 13 19:50:10.341996 containerd[1885]: time="2025-02-13T19:50:10.341807255Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully" Feb 13 19:50:10.342565 containerd[1885]: time="2025-02-13T19:50:10.342211968Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\"" Feb 13 19:50:10.342763 containerd[1885]: time="2025-02-13T19:50:10.342662706Z" level=info msg="Ensure that sandbox ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520 in task-service has been cleanup successfully" Feb 13 19:50:10.343379 containerd[1885]: time="2025-02-13T19:50:10.343357775Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\"" Feb 13 19:50:10.343896 containerd[1885]: time="2025-02-13T19:50:10.343872364Z" level=info msg="TearDown network for sandbox 
\"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully" Feb 13 19:50:10.344137 containerd[1885]: time="2025-02-13T19:50:10.344068603Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully" Feb 13 19:50:10.344860 containerd[1885]: time="2025-02-13T19:50:10.344737899Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\"" Feb 13 19:50:10.345181 containerd[1885]: time="2025-02-13T19:50:10.345144823Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully" Feb 13 19:50:10.345584 containerd[1885]: time="2025-02-13T19:50:10.345519498Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully" Feb 13 19:50:10.346385 containerd[1885]: time="2025-02-13T19:50:10.345967970Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\"" Feb 13 19:50:10.346385 containerd[1885]: time="2025-02-13T19:50:10.346057364Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully" Feb 13 19:50:10.346385 containerd[1885]: time="2025-02-13T19:50:10.346074320Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully" Feb 13 19:50:10.346810 containerd[1885]: time="2025-02-13T19:50:10.346784174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:9,}" Feb 13 19:50:10.347819 containerd[1885]: time="2025-02-13T19:50:10.347698799Z" level=info msg="TearDown network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" successfully" Feb 13 19:50:10.347819 containerd[1885]: 
time="2025-02-13T19:50:10.347762108Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" returns successfully" Feb 13 19:50:10.348642 containerd[1885]: time="2025-02-13T19:50:10.348507208Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\"" Feb 13 19:50:10.348731 containerd[1885]: time="2025-02-13T19:50:10.348598921Z" level=info msg="TearDown network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" successfully" Feb 13 19:50:10.348731 containerd[1885]: time="2025-02-13T19:50:10.348688791Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" returns successfully" Feb 13 19:50:10.349691 containerd[1885]: time="2025-02-13T19:50:10.349454116Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\"" Feb 13 19:50:10.349691 containerd[1885]: time="2025-02-13T19:50:10.349554100Z" level=info msg="TearDown network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" successfully" Feb 13 19:50:10.349691 containerd[1885]: time="2025-02-13T19:50:10.349568945Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" returns successfully" Feb 13 19:50:10.349779 systemd[1]: run-netns-cni\x2d7cf36907\x2d307a\x2d1d43\x2d880c\x2d0120cb71a815.mount: Deactivated successfully. 
Feb 13 19:50:10.353363 containerd[1885]: time="2025-02-13T19:50:10.353086969Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\"" Feb 13 19:50:10.353363 containerd[1885]: time="2025-02-13T19:50:10.353236539Z" level=info msg="TearDown network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" successfully" Feb 13 19:50:10.353363 containerd[1885]: time="2025-02-13T19:50:10.353254087Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" returns successfully" Feb 13 19:50:10.355324 containerd[1885]: time="2025-02-13T19:50:10.354870942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:4,}" Feb 13 19:50:10.670241 containerd[1885]: time="2025-02-13T19:50:10.670192955Z" level=error msg="Failed to destroy network for sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:10.670849 containerd[1885]: time="2025-02-13T19:50:10.670812277Z" level=error msg="encountered an error cleaning up failed sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:10.670985 containerd[1885]: time="2025-02-13T19:50:10.670890893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox 
\"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:10.671366 kubelet[2339]: E0213 19:50:10.671239 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:10.671366 kubelet[2339]: E0213 19:50:10.671328 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:50:10.671764 kubelet[2339]: E0213 19:50:10.671543 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj" Feb 13 19:50:10.671764 kubelet[2339]: E0213 19:50:10.671631 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5" Feb 13 19:50:10.674634 containerd[1885]: time="2025-02-13T19:50:10.674254700Z" level=error msg="Failed to destroy network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:10.674634 containerd[1885]: time="2025-02-13T19:50:10.674596151Z" level=error msg="encountered an error cleaning up failed sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:10.674921 containerd[1885]: time="2025-02-13T19:50:10.674891827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:10.675358 kubelet[2339]: E0213 19:50:10.675323 2339 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:50:10.675835 kubelet[2339]: E0213 19:50:10.675621 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn" Feb 13 19:50:10.675835 kubelet[2339]: E0213 19:50:10.675666 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn" Feb 13 19:50:10.676094 kubelet[2339]: E0213 19:50:10.675915 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-5nrdn" podUID="43673bfd-14eb-4ac2-bae2-c2c6f50ab414" Feb 13 19:50:10.934945 kubelet[2339]: E0213 19:50:10.934817 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:11.275471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483-shm.mount: Deactivated successfully. Feb 13 19:50:11.348705 kubelet[2339]: I0213 19:50:11.348673 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483" Feb 13 19:50:11.353665 containerd[1885]: time="2025-02-13T19:50:11.349951149Z" level=info msg="StopPodSandbox for \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\"" Feb 13 19:50:11.353665 containerd[1885]: time="2025-02-13T19:50:11.350632304Z" level=info msg="Ensure that sandbox 6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483 in task-service has been cleanup successfully" Feb 13 19:50:11.360479 containerd[1885]: time="2025-02-13T19:50:11.358957222Z" level=info msg="TearDown network for sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" successfully" Feb 13 19:50:11.360479 containerd[1885]: time="2025-02-13T19:50:11.358996943Z" level=info msg="StopPodSandbox for \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" returns successfully" Feb 13 19:50:11.360963 systemd[1]: run-netns-cni\x2d4b8b40f3\x2d0d82\x2d4dc2\x2dadaf\x2d4ee30f868828.mount: Deactivated successfully. 
Feb 13 19:50:11.361680 containerd[1885]: time="2025-02-13T19:50:11.361094847Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\"" Feb 13 19:50:11.363234 containerd[1885]: time="2025-02-13T19:50:11.361897325Z" level=info msg="TearDown network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" successfully" Feb 13 19:50:11.363234 containerd[1885]: time="2025-02-13T19:50:11.361969471Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" returns successfully" Feb 13 19:50:11.363234 containerd[1885]: time="2025-02-13T19:50:11.363118133Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\"" Feb 13 19:50:11.365443 containerd[1885]: time="2025-02-13T19:50:11.365258393Z" level=info msg="TearDown network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" successfully" Feb 13 19:50:11.365443 containerd[1885]: time="2025-02-13T19:50:11.365286351Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" returns successfully" Feb 13 19:50:11.365794 containerd[1885]: time="2025-02-13T19:50:11.365773738Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\"" Feb 13 19:50:11.365949 containerd[1885]: time="2025-02-13T19:50:11.365921115Z" level=info msg="TearDown network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" successfully" Feb 13 19:50:11.365949 containerd[1885]: time="2025-02-13T19:50:11.365939785Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" returns successfully" Feb 13 19:50:11.367229 containerd[1885]: time="2025-02-13T19:50:11.366941221Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\"" Feb 13 19:50:11.367229 
containerd[1885]: time="2025-02-13T19:50:11.367101124Z" level=info msg="TearDown network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" successfully" Feb 13 19:50:11.367229 containerd[1885]: time="2025-02-13T19:50:11.367117788Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" returns successfully" Feb 13 19:50:11.368245 containerd[1885]: time="2025-02-13T19:50:11.368179000Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\"" Feb 13 19:50:11.368313 containerd[1885]: time="2025-02-13T19:50:11.368300853Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully" Feb 13 19:50:11.368479 containerd[1885]: time="2025-02-13T19:50:11.368317072Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully" Feb 13 19:50:11.369395 containerd[1885]: time="2025-02-13T19:50:11.369366460Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\"" Feb 13 19:50:11.369472 containerd[1885]: time="2025-02-13T19:50:11.369458721Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully" Feb 13 19:50:11.369521 containerd[1885]: time="2025-02-13T19:50:11.369474434Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully" Feb 13 19:50:11.369729 kubelet[2339]: I0213 19:50:11.369704 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac" Feb 13 19:50:11.371137 containerd[1885]: time="2025-02-13T19:50:11.370468838Z" level=info msg="StopPodSandbox for \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\"" Feb 13 
19:50:11.371137 containerd[1885]: time="2025-02-13T19:50:11.370928398Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\"" Feb 13 19:50:11.371137 containerd[1885]: time="2025-02-13T19:50:11.371010165Z" level=info msg="Ensure that sandbox bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac in task-service has been cleanup successfully" Feb 13 19:50:11.371321 containerd[1885]: time="2025-02-13T19:50:11.371013001Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully" Feb 13 19:50:11.371321 containerd[1885]: time="2025-02-13T19:50:11.371282094Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully" Feb 13 19:50:11.373265 containerd[1885]: time="2025-02-13T19:50:11.372539143Z" level=info msg="TearDown network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" successfully" Feb 13 19:50:11.373265 containerd[1885]: time="2025-02-13T19:50:11.372563663Z" level=info msg="StopPodSandbox for \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" returns successfully" Feb 13 19:50:11.373658 containerd[1885]: time="2025-02-13T19:50:11.373542124Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\"" Feb 13 19:50:11.373658 containerd[1885]: time="2025-02-13T19:50:11.373633394Z" level=info msg="TearDown network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" successfully" Feb 13 19:50:11.373658 containerd[1885]: time="2025-02-13T19:50:11.373647294Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" returns successfully" Feb 13 19:50:11.373806 containerd[1885]: time="2025-02-13T19:50:11.373713202Z" level=info msg="StopPodSandbox for 
\"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\"" Feb 13 19:50:11.373806 containerd[1885]: time="2025-02-13T19:50:11.373787391Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully" Feb 13 19:50:11.373806 containerd[1885]: time="2025-02-13T19:50:11.373800399Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully" Feb 13 19:50:11.375741 containerd[1885]: time="2025-02-13T19:50:11.375307765Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\"" Feb 13 19:50:11.375741 containerd[1885]: time="2025-02-13T19:50:11.375396899Z" level=info msg="TearDown network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" successfully" Feb 13 19:50:11.375741 containerd[1885]: time="2025-02-13T19:50:11.375413175Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" returns successfully" Feb 13 19:50:11.374700 systemd[1]: run-netns-cni\x2dd5fab771\x2db836\x2d4cd7\x2dd33a\x2d235429fbbfcb.mount: Deactivated successfully. 
Feb 13 19:50:11.377943 containerd[1885]: time="2025-02-13T19:50:11.377915521Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\""
Feb 13 19:50:11.378082 containerd[1885]: time="2025-02-13T19:50:11.378015895Z" level=info msg="TearDown network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" successfully"
Feb 13 19:50:11.378082 containerd[1885]: time="2025-02-13T19:50:11.378030050Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" returns successfully"
Feb 13 19:50:11.378327 containerd[1885]: time="2025-02-13T19:50:11.378107151Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:11.378327 containerd[1885]: time="2025-02-13T19:50:11.378212391Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:11.379854 containerd[1885]: time="2025-02-13T19:50:11.379829357Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:11.380918 containerd[1885]: time="2025-02-13T19:50:11.380581822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:10,}"
Feb 13 19:50:11.385815 containerd[1885]: time="2025-02-13T19:50:11.385776653Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\""
Feb 13 19:50:11.385963 containerd[1885]: time="2025-02-13T19:50:11.385903558Z" level=info msg="TearDown network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" successfully"
Feb 13 19:50:11.385963 containerd[1885]: time="2025-02-13T19:50:11.385919203Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" returns successfully"
Feb 13 19:50:11.387754 containerd[1885]: time="2025-02-13T19:50:11.387716343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:5,}"
Feb 13 19:50:11.638022 containerd[1885]: time="2025-02-13T19:50:11.637251420Z" level=error msg="Failed to destroy network for sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:11.638022 containerd[1885]: time="2025-02-13T19:50:11.637614761Z" level=error msg="encountered an error cleaning up failed sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:11.638022 containerd[1885]: time="2025-02-13T19:50:11.637690791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:11.638427 kubelet[2339]: E0213 19:50:11.637940 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:11.638427 kubelet[2339]: E0213 19:50:11.638005 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn"
Feb 13 19:50:11.638427 kubelet[2339]: E0213 19:50:11.638035 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn"
Feb 13 19:50:11.638634 kubelet[2339]: E0213 19:50:11.638083 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-5nrdn" podUID="43673bfd-14eb-4ac2-bae2-c2c6f50ab414"
Feb 13 19:50:11.665993 containerd[1885]: time="2025-02-13T19:50:11.665940024Z" level=error msg="Failed to destroy network for sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:11.666319 containerd[1885]: time="2025-02-13T19:50:11.666282755Z" level=error msg="encountered an error cleaning up failed sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:11.666398 containerd[1885]: time="2025-02-13T19:50:11.666361924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:11.666649 kubelet[2339]: E0213 19:50:11.666610 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:11.666723 kubelet[2339]: E0213 19:50:11.666678 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:11.666723 kubelet[2339]: E0213 19:50:11.666707 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:11.666809 kubelet[2339]: E0213 19:50:11.666760 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:50:11.950073 kubelet[2339]: E0213 19:50:11.935695 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:11.966344 containerd[1885]: time="2025-02-13T19:50:11.966287220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:11.968261 containerd[1885]: time="2025-02-13T19:50:11.968206183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Feb 13 19:50:11.970410 containerd[1885]: time="2025-02-13T19:50:11.970341987Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:11.978857 containerd[1885]: time="2025-02-13T19:50:11.978744048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:11.983235 containerd[1885]: time="2025-02-13T19:50:11.981086588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.748431885s"
Feb 13 19:50:11.983235 containerd[1885]: time="2025-02-13T19:50:11.981139434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Feb 13 19:50:12.012749 containerd[1885]: time="2025-02-13T19:50:12.012706365Z" level=info msg="CreateContainer within sandbox \"05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 19:50:12.034897 containerd[1885]: time="2025-02-13T19:50:12.034845218Z" level=info msg="CreateContainer within sandbox \"05b0383aab88c5bd826d6e504fee5d5d9fc7007f93e629b5bd511be71f84fb46\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"655e8ffd9a32e1388f98fcfcbeddaa8e42b942962a4cc304f8b47d6da92657ff\""
Feb 13 19:50:12.035629 containerd[1885]: time="2025-02-13T19:50:12.035581819Z" level=info msg="StartContainer for \"655e8ffd9a32e1388f98fcfcbeddaa8e42b942962a4cc304f8b47d6da92657ff\""
Feb 13 19:50:12.188505 systemd[1]: Started cri-containerd-655e8ffd9a32e1388f98fcfcbeddaa8e42b942962a4cc304f8b47d6da92657ff.scope - libcontainer container 655e8ffd9a32e1388f98fcfcbeddaa8e42b942962a4cc304f8b47d6da92657ff.
Feb 13 19:50:12.242794 containerd[1885]: time="2025-02-13T19:50:12.242520931Z" level=info msg="StartContainer for \"655e8ffd9a32e1388f98fcfcbeddaa8e42b942962a4cc304f8b47d6da92657ff\" returns successfully"
Feb 13 19:50:12.280033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe-shm.mount: Deactivated successfully.
Feb 13 19:50:12.280535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab-shm.mount: Deactivated successfully.
Feb 13 19:50:12.286284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159178081.mount: Deactivated successfully.
Feb 13 19:50:12.379321 kubelet[2339]: I0213 19:50:12.378427 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab"
Feb 13 19:50:12.380064 containerd[1885]: time="2025-02-13T19:50:12.379188136Z" level=info msg="StopPodSandbox for \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\""
Feb 13 19:50:12.380064 containerd[1885]: time="2025-02-13T19:50:12.379461900Z" level=info msg="Ensure that sandbox fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab in task-service has been cleanup successfully"
Feb 13 19:50:12.383668 containerd[1885]: time="2025-02-13T19:50:12.382551591Z" level=info msg="TearDown network for sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\" successfully"
Feb 13 19:50:12.383668 containerd[1885]: time="2025-02-13T19:50:12.382585596Z" level=info msg="StopPodSandbox for \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\" returns successfully"
Feb 13 19:50:12.383329 systemd[1]: run-netns-cni\x2de47dbc0c\x2da15d\x2d81e7\x2dda84\x2db1ab3b8529a3.mount: Deactivated successfully.
Feb 13 19:50:12.388502 containerd[1885]: time="2025-02-13T19:50:12.387901303Z" level=info msg="StopPodSandbox for \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\""
Feb 13 19:50:12.388502 containerd[1885]: time="2025-02-13T19:50:12.387996025Z" level=info msg="TearDown network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" successfully"
Feb 13 19:50:12.388502 containerd[1885]: time="2025-02-13T19:50:12.388005902Z" level=info msg="StopPodSandbox for \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" returns successfully"
Feb 13 19:50:12.389325 containerd[1885]: time="2025-02-13T19:50:12.388731683Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\""
Feb 13 19:50:12.389325 containerd[1885]: time="2025-02-13T19:50:12.389007500Z" level=info msg="TearDown network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" successfully"
Feb 13 19:50:12.389325 containerd[1885]: time="2025-02-13T19:50:12.389034360Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" returns successfully"
Feb 13 19:50:12.390295 containerd[1885]: time="2025-02-13T19:50:12.389769734Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\""
Feb 13 19:50:12.390295 containerd[1885]: time="2025-02-13T19:50:12.389961226Z" level=info msg="TearDown network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" successfully"
Feb 13 19:50:12.390295 containerd[1885]: time="2025-02-13T19:50:12.389977597Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" returns successfully"
Feb 13 19:50:12.390761 containerd[1885]: time="2025-02-13T19:50:12.390575056Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\""
Feb 13 19:50:12.391290 containerd[1885]: time="2025-02-13T19:50:12.391265550Z" level=info msg="TearDown network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" successfully"
Feb 13 19:50:12.391290 containerd[1885]: time="2025-02-13T19:50:12.391285046Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" returns successfully"
Feb 13 19:50:12.393659 containerd[1885]: time="2025-02-13T19:50:12.392554647Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\""
Feb 13 19:50:12.393659 containerd[1885]: time="2025-02-13T19:50:12.392659321Z" level=info msg="TearDown network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" successfully"
Feb 13 19:50:12.393659 containerd[1885]: time="2025-02-13T19:50:12.392674251Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" returns successfully"
Feb 13 19:50:12.394794 containerd[1885]: time="2025-02-13T19:50:12.394349874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:6,}"
Feb 13 19:50:12.409425 kubelet[2339]: I0213 19:50:12.409393 2339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe"
Feb 13 19:50:12.414486 containerd[1885]: time="2025-02-13T19:50:12.414436554Z" level=info msg="StopPodSandbox for \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\""
Feb 13 19:50:12.414972 containerd[1885]: time="2025-02-13T19:50:12.414833092Z" level=info msg="Ensure that sandbox 0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe in task-service has been cleanup successfully"
Feb 13 19:50:12.419989 systemd[1]: run-netns-cni\x2dc9c8aac7\x2da20e\x2d5148\x2d2dc4\x2db72645b2bd4f.mount: Deactivated successfully.
Feb 13 19:50:12.423330 containerd[1885]: time="2025-02-13T19:50:12.421855205Z" level=info msg="TearDown network for sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\" successfully"
Feb 13 19:50:12.423330 containerd[1885]: time="2025-02-13T19:50:12.422497785Z" level=info msg="StopPodSandbox for \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\" returns successfully"
Feb 13 19:50:12.426178 containerd[1885]: time="2025-02-13T19:50:12.424410172Z" level=info msg="StopPodSandbox for \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\""
Feb 13 19:50:12.426178 containerd[1885]: time="2025-02-13T19:50:12.424530117Z" level=info msg="TearDown network for sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" successfully"
Feb 13 19:50:12.426178 containerd[1885]: time="2025-02-13T19:50:12.424556813Z" level=info msg="StopPodSandbox for \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" returns successfully"
Feb 13 19:50:12.432414 kubelet[2339]: I0213 19:50:12.432353 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8s427" podStartSLOduration=3.967202764 podStartE2EDuration="27.432239612s" podCreationTimestamp="2025-02-13 19:49:45 +0000 UTC" firstStartedPulling="2025-02-13 19:49:48.51979129 +0000 UTC m=+4.705020241" lastFinishedPulling="2025-02-13 19:50:11.98482813 +0000 UTC m=+28.170057089" observedRunningTime="2025-02-13 19:50:12.423485696 +0000 UTC m=+28.608714693" watchObservedRunningTime="2025-02-13 19:50:12.432239612 +0000 UTC m=+28.617468576"
Feb 13 19:50:12.441944 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 19:50:12.442089 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Feb 13 19:50:12.443655 containerd[1885]: time="2025-02-13T19:50:12.442849737Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\""
Feb 13 19:50:12.448188 containerd[1885]: time="2025-02-13T19:50:12.447753055Z" level=info msg="TearDown network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" successfully"
Feb 13 19:50:12.449764 containerd[1885]: time="2025-02-13T19:50:12.449113637Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" returns successfully"
Feb 13 19:50:12.450553 containerd[1885]: time="2025-02-13T19:50:12.450517415Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\""
Feb 13 19:50:12.450845 containerd[1885]: time="2025-02-13T19:50:12.450815392Z" level=info msg="TearDown network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" successfully"
Feb 13 19:50:12.450845 containerd[1885]: time="2025-02-13T19:50:12.450836644Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" returns successfully"
Feb 13 19:50:12.451223 containerd[1885]: time="2025-02-13T19:50:12.451201146Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\""
Feb 13 19:50:12.452449 containerd[1885]: time="2025-02-13T19:50:12.451297676Z" level=info msg="TearDown network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" successfully"
Feb 13 19:50:12.452449 containerd[1885]: time="2025-02-13T19:50:12.451320379Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" returns successfully"
Feb 13 19:50:12.452449 containerd[1885]: time="2025-02-13T19:50:12.451954090Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\""
Feb 13 19:50:12.452449 containerd[1885]: time="2025-02-13T19:50:12.452087059Z" level=info msg="TearDown network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" successfully"
Feb 13 19:50:12.452449 containerd[1885]: time="2025-02-13T19:50:12.452103185Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" returns successfully"
Feb 13 19:50:12.453921 containerd[1885]: time="2025-02-13T19:50:12.453890344Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\""
Feb 13 19:50:12.454232 containerd[1885]: time="2025-02-13T19:50:12.454210621Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully"
Feb 13 19:50:12.454299 containerd[1885]: time="2025-02-13T19:50:12.454233096Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully"
Feb 13 19:50:12.457043 containerd[1885]: time="2025-02-13T19:50:12.455993065Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\""
Feb 13 19:50:12.457043 containerd[1885]: time="2025-02-13T19:50:12.456105744Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully"
Feb 13 19:50:12.457043 containerd[1885]: time="2025-02-13T19:50:12.456119671Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully"
Feb 13 19:50:12.458041 containerd[1885]: time="2025-02-13T19:50:12.458013945Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\""
Feb 13 19:50:12.458389 containerd[1885]: time="2025-02-13T19:50:12.458365576Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully"
Feb 13 19:50:12.459219 containerd[1885]: time="2025-02-13T19:50:12.458491187Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully"
Feb 13 19:50:12.459949 containerd[1885]: time="2025-02-13T19:50:12.459821120Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:12.459949 containerd[1885]: time="2025-02-13T19:50:12.459917879Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully"
Feb 13 19:50:12.459949 containerd[1885]: time="2025-02-13T19:50:12.459933767Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully"
Feb 13 19:50:12.460877 containerd[1885]: time="2025-02-13T19:50:12.460853839Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:12.461070 containerd[1885]: time="2025-02-13T19:50:12.461044567Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:12.461190 containerd[1885]: time="2025-02-13T19:50:12.461061066Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:12.462098 containerd[1885]: time="2025-02-13T19:50:12.462070402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:11,}"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.798 [INFO][3451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.798 [INFO][3451] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9" iface="eth0" netns="/var/run/netns/cni-99f4f1f7-f253-6b72-f478-fa42a83a6d99"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.802 [INFO][3451] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9" iface="eth0" netns="/var/run/netns/cni-99f4f1f7-f253-6b72-f478-fa42a83a6d99"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.805 [INFO][3451] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9" iface="eth0" netns="/var/run/netns/cni-99f4f1f7-f253-6b72-f478-fa42a83a6d99"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.805 [INFO][3451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.805 [INFO][3451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.888 [INFO][3468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9" HandleID="k8s-pod-network.87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9" Workload="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.888 [INFO][3468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.888 [INFO][3468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.908 [WARNING][3468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9" HandleID="k8s-pod-network.87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9" Workload="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.909 [INFO][3468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9" HandleID="k8s-pod-network.87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9" Workload="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.916 [INFO][3468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:12.920648 containerd[1885]: 2025-02-13 19:50:12.918 [INFO][3451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9"
Feb 13 19:50:12.925596 containerd[1885]: time="2025-02-13T19:50:12.925543413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:11,} failed, error" error="failed to setup network for sandbox \"87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:12.927139 kubelet[2339]: E0213 19:50:12.926288 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:12.927139 kubelet[2339]: E0213 19:50:12.926392 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:12.927139 kubelet[2339]: E0213 19:50:12.926422 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fdgfj"
Feb 13 19:50:12.927439 kubelet[2339]: E0213 19:50:12.926488 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fdgfj_calico-system(43ba3476-37d5-44c8-9a35-8e22f8bd98f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87ae5b83656ce8addd73acea93eb09f914176451e8cf077ce9f36f12d72417e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fdgfj" podUID="43ba3476-37d5-44c8-9a35-8e22f8bd98f5"
Feb 13 19:50:12.936457 kubelet[2339]: E0213 19:50:12.936370 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.789 [INFO][3432] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.789 [INFO][3432] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5" iface="eth0" netns="/var/run/netns/cni-647168b9-1cd6-fe4c-d168-577651af2111"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.789 [INFO][3432] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5" iface="eth0" netns="/var/run/netns/cni-647168b9-1cd6-fe4c-d168-577651af2111"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.793 [INFO][3432] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5" iface="eth0" netns="/var/run/netns/cni-647168b9-1cd6-fe4c-d168-577651af2111"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.794 [INFO][3432] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.794 [INFO][3432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.893 [INFO][3467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5" HandleID="k8s-pod-network.c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5" Workload="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.893 [INFO][3467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.916 [INFO][3467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.927 [WARNING][3467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5" HandleID="k8s-pod-network.c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5" Workload="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.928 [INFO][3467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5" HandleID="k8s-pod-network.c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5" Workload="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.936 [INFO][3467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:12.939592 containerd[1885]: 2025-02-13 19:50:12.938 [INFO][3432] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5"
Feb 13 19:50:12.944263 containerd[1885]: time="2025-02-13T19:50:12.944207519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:12.944562 kubelet[2339]: E0213 19:50:12.944505 2339 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:50:12.944671 kubelet[2339]: E0213 19:50:12.944574 2339 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn"
Feb 13 19:50:12.944671 kubelet[2339]: E0213 19:50:12.944605 2339 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-5nrdn"
Feb 13 19:50:12.944811 kubelet[2339]: E0213 19:50:12.944659 2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-5nrdn_default(43673bfd-14eb-4ac2-bae2-c2c6f50ab414)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-5nrdn" podUID="43673bfd-14eb-4ac2-bae2-c2c6f50ab414"
Feb 13 19:50:13.280915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5-shm.mount: Deactivated successfully.
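The failed RunPodSandbox records above all carry the same root cause inside the kubelet's quoted `err` field: `stat /var/lib/calico/nodename: no such file or directory`. As a minimal sketch (not part of the original log tooling — the helper name and regexes are illustrative), the sandbox ID and that root-cause string can be pulled out of such a record like this:

```python
import re

# One of the kubelet error records above, abridged to the relevant fields.
record = (
    'kubelet[2339]: E0213 19:50:12.944505 2339 log.go:32] '
    '"RunPodSandbox from runtime service failed" '
    'err="rpc error: code = Unknown desc = failed to setup network for sandbox '
    '\\"c31f9ceba2a2169e99108f0961e00eb8579d46f9ca5831861ffbc89b5aa65fc5\\": '
    'plugin type=\\"calico\\" failed (add): stat /var/lib/calico/nodename: '
    'no such file or directory: check that the calico/node container is running '
    'and has mounted /var/lib/calico/"'
)

def parse_sandbox_failure(line: str):
    """Extract the 64-hex sandbox ID and the CNI plugin's root-cause message
    from a kubelet RunPodSandbox error record (illustrative helper)."""
    # The sandbox ID appears as a backslash-escaped quoted 64-char hex string.
    sandbox = re.search(r'sandbox \\+"([0-9a-f]{64})\\+"', line)
    # The root cause is everything after 'failed (add): ' up to the closing quote.
    cause = re.search(r'failed \(add\): ([^"]+)"$', line)
    return (
        sandbox.group(1) if sandbox else None,
        cause.group(1) if cause else None,
    )

sandbox_id, cause = parse_sandbox_failure(record)
print(sandbox_id[:12], "->", cause)
```

The extracted cause matches the hint embedded in the message itself: the calico/node container had not yet written `/var/lib/calico/nodename`, which is consistent with the retries (`Attempt:6`) seen above until networking comes up.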
Feb 13 19:50:13.416477 containerd[1885]: time="2025-02-13T19:50:13.416434063Z" level=info msg="StopPodSandbox for \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\""
Feb 13 19:50:13.417062 containerd[1885]: time="2025-02-13T19:50:13.416567361Z" level=info msg="TearDown network for sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\" successfully"
Feb 13 19:50:13.417062 containerd[1885]: time="2025-02-13T19:50:13.416587466Z" level=info msg="StopPodSandbox for \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\" returns successfully"
Feb 13 19:50:13.417062 containerd[1885]: time="2025-02-13T19:50:13.416442585Z" level=info msg="StopPodSandbox for \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\""
Feb 13 19:50:13.417062 containerd[1885]: time="2025-02-13T19:50:13.416887295Z" level=info msg="TearDown network for sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\" successfully"
Feb 13 19:50:13.417062 containerd[1885]: time="2025-02-13T19:50:13.416905856Z" level=info msg="StopPodSandbox for \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\" returns successfully"
Feb 13 19:50:13.424076 containerd[1885]: time="2025-02-13T19:50:13.417351243Z" level=info msg="StopPodSandbox for \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\""
Feb 13 19:50:13.424076 containerd[1885]: time="2025-02-13T19:50:13.417443573Z" level=info msg="TearDown network for sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" successfully"
Feb 13 19:50:13.424076 containerd[1885]: time="2025-02-13T19:50:13.417457852Z" level=info msg="StopPodSandbox for \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" returns successfully"
Feb 13 19:50:13.424076 containerd[1885]: time="2025-02-13T19:50:13.421981030Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\""
Feb 13 19:50:13.424076 containerd[1885]: time="2025-02-13T19:50:13.422324676Z" level=info msg="TearDown network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" successfully"
Feb 13 19:50:13.424076 containerd[1885]: time="2025-02-13T19:50:13.422344828Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" returns successfully"
Feb 13 19:50:13.429196 containerd[1885]: time="2025-02-13T19:50:13.428863855Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\""
Feb 13 19:50:13.429196 containerd[1885]: time="2025-02-13T19:50:13.429056472Z" level=info msg="TearDown network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" successfully"
Feb 13 19:50:13.429196 containerd[1885]: time="2025-02-13T19:50:13.429114776Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" returns successfully"
Feb 13 19:50:13.429722 containerd[1885]: time="2025-02-13T19:50:13.429060264Z" level=info msg="StopPodSandbox for \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\""
Feb 13 19:50:13.429722 containerd[1885]: time="2025-02-13T19:50:13.429413325Z" level=info msg="TearDown network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" successfully"
Feb 13 19:50:13.429722 containerd[1885]: time="2025-02-13T19:50:13.429517126Z" level=info msg="StopPodSandbox for \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" returns successfully"
Feb 13 19:50:13.429722 containerd[1885]: time="2025-02-13T19:50:13.429685805Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\""
Feb 13 19:50:13.429914 containerd[1885]: time="2025-02-13T19:50:13.429768913Z" level=info msg="TearDown network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" successfully"
Feb 13 19:50:13.429914 containerd[1885]: time="2025-02-13T19:50:13.429783270Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" returns successfully"
Feb 13 19:50:13.432734 containerd[1885]: time="2025-02-13T19:50:13.432648811Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\""
Feb 13 19:50:13.432850 containerd[1885]: time="2025-02-13T19:50:13.432775039Z" level=info msg="TearDown network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" successfully"
Feb 13 19:50:13.433476 containerd[1885]: time="2025-02-13T19:50:13.433155259Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" returns successfully"
Feb 13 19:50:13.433476 containerd[1885]: time="2025-02-13T19:50:13.433090711Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\""
Feb 13 19:50:13.433476 containerd[1885]: time="2025-02-13T19:50:13.433389726Z" level=info msg="TearDown network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" successfully"
Feb 13 19:50:13.433476 containerd[1885]: time="2025-02-13T19:50:13.433408286Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" returns successfully"
Feb 13 19:50:13.433709 containerd[1885]: time="2025-02-13T19:50:13.433650733Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\""
Feb 13 19:50:13.433755 containerd[1885]: time="2025-02-13T19:50:13.433738964Z" level=info msg="TearDown network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" successfully"
Feb 13 19:50:13.436799 containerd[1885]: time="2025-02-13T19:50:13.433754159Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" returns successfully"
Feb 13 19:50:13.436799 containerd[1885]: time="2025-02-13T19:50:13.436489550Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\""
Feb 13 19:50:13.436799 containerd[1885]: time="2025-02-13T19:50:13.436543594Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\""
Feb 13 19:50:13.436799 containerd[1885]: time="2025-02-13T19:50:13.436680302Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully"
Feb 13 19:50:13.436799 containerd[1885]: time="2025-02-13T19:50:13.436729799Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully"
Feb 13 19:50:13.437034 containerd[1885]: time="2025-02-13T19:50:13.436823008Z" level=info msg="TearDown network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" successfully"
Feb 13 19:50:13.437034 containerd[1885]: time="2025-02-13T19:50:13.436839655Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" returns successfully"
Feb 13 19:50:13.437743 containerd[1885]: time="2025-02-13T19:50:13.437303759Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\""
Feb 13 19:50:13.437743 containerd[1885]: time="2025-02-13T19:50:13.437449109Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully"
Feb 13 19:50:13.437743 containerd[1885]: time="2025-02-13T19:50:13.437464006Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully"
Feb 13 19:50:13.437743 containerd[1885]: time="2025-02-13T19:50:13.437551063Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\""
Feb 13 19:50:13.437743 containerd[1885]: time="2025-02-13T19:50:13.437643831Z" level=info msg="TearDown network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" successfully"
Feb 13 19:50:13.437743 containerd[1885]: time="2025-02-13T19:50:13.437674675Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" returns successfully"
Feb 13 19:50:13.438292 containerd[1885]: time="2025-02-13T19:50:13.438225850Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\""
Feb 13 19:50:13.438465 containerd[1885]: time="2025-02-13T19:50:13.438313881Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully"
Feb 13 19:50:13.438465 containerd[1885]: time="2025-02-13T19:50:13.438328042Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully"
Feb 13 19:50:13.438899 containerd[1885]: time="2025-02-13T19:50:13.438863892Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:13.439148 containerd[1885]: time="2025-02-13T19:50:13.438966337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:6,}"
Feb 13 19:50:13.439148 containerd[1885]: time="2025-02-13T19:50:13.439041067Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully"
Feb 13 19:50:13.439148 containerd[1885]: time="2025-02-13T19:50:13.439054880Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully"
Feb 13 19:50:13.440297 containerd[1885]: time="2025-02-13T19:50:13.440230758Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:13.440620 containerd[1885]: time="2025-02-13T19:50:13.440601024Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:13.440701 containerd[1885]: time="2025-02-13T19:50:13.440685041Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:13.441581 containerd[1885]: time="2025-02-13T19:50:13.441483662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:11,}"
Feb 13 19:50:13.485584 systemd[1]: run-containerd-runc-k8s.io-655e8ffd9a32e1388f98fcfcbeddaa8e42b942962a4cc304f8b47d6da92657ff-runc.zgcliE.mount: Deactivated successfully.
Feb 13 19:50:13.767603 (udev-worker)[3369]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:13.768494 systemd-networkd[1737]: cali9b2a82d273b: Link UP
Feb 13 19:50:13.768727 systemd-networkd[1737]: cali9b2a82d273b: Gained carrier
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.558 [INFO][3508] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.595 [INFO][3508] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.31.165-k8s-csi--node--driver--fdgfj-eth0 csi-node-driver- calico-system 43ba3476-37d5-44c8-9a35-8e22f8bd98f5 1194 0 2025-02-13 19:49:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.31.165 csi-node-driver-fdgfj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9b2a82d273b [] []}} ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Namespace="calico-system" Pod="csi-node-driver-fdgfj" WorkloadEndpoint="172.31.31.165-k8s-csi--node--driver--fdgfj-"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.595 [INFO][3508] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Namespace="calico-system" Pod="csi-node-driver-fdgfj" WorkloadEndpoint="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.684 [INFO][3537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" HandleID="k8s-pod-network.3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Workload="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.704 [INFO][3537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" HandleID="k8s-pod-network.3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Workload="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051970), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.31.165", "pod":"csi-node-driver-fdgfj", "timestamp":"2025-02-13 19:50:13.683994236 +0000 UTC"}, Hostname:"172.31.31.165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.704 [INFO][3537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.704 [INFO][3537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.704 [INFO][3537] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.31.165'
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.707 [INFO][3537] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" host="172.31.31.165"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.716 [INFO][3537] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.31.165"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.723 [INFO][3537] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="172.31.31.165"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.726 [INFO][3537] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="172.31.31.165"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.728 [INFO][3537] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="172.31.31.165"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.729 [INFO][3537] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" host="172.31.31.165"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.732 [INFO][3537] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.740 [INFO][3537] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" host="172.31.31.165"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.755 [INFO][3537] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.65/26] block=192.168.67.64/26 handle="k8s-pod-network.3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" host="172.31.31.165"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.755 [INFO][3537] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.65/26] handle="k8s-pod-network.3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" host="172.31.31.165"
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.755 [INFO][3537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:50:13.802185 containerd[1885]: 2025-02-13 19:50:13.755 [INFO][3537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.65/26] IPv6=[] ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" HandleID="k8s-pod-network.3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Workload="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:13.803472 containerd[1885]: 2025-02-13 19:50:13.757 [INFO][3508] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Namespace="calico-system" Pod="csi-node-driver-fdgfj" WorkloadEndpoint="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.165-k8s-csi--node--driver--fdgfj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43ba3476-37d5-44c8-9a35-8e22f8bd98f5", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.165", ContainerID:"", Pod:"csi-node-driver-fdgfj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b2a82d273b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:13.803472 containerd[1885]: 2025-02-13 19:50:13.757 [INFO][3508] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.65/32] ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Namespace="calico-system" Pod="csi-node-driver-fdgfj" WorkloadEndpoint="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:13.803472 containerd[1885]: 2025-02-13 19:50:13.758 [INFO][3508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b2a82d273b ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Namespace="calico-system" Pod="csi-node-driver-fdgfj" WorkloadEndpoint="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:13.803472 containerd[1885]: 2025-02-13 19:50:13.768 [INFO][3508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Namespace="calico-system" Pod="csi-node-driver-fdgfj" WorkloadEndpoint="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:13.803472 containerd[1885]: 2025-02-13 19:50:13.769 [INFO][3508] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Namespace="calico-system" Pod="csi-node-driver-fdgfj" WorkloadEndpoint="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.165-k8s-csi--node--driver--fdgfj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"43ba3476-37d5-44c8-9a35-8e22f8bd98f5", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.165", ContainerID:"3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b", Pod:"csi-node-driver-fdgfj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b2a82d273b", MAC:"6e:be:08:b6:4e:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:13.803472 containerd[1885]: 2025-02-13 19:50:13.800 [INFO][3508] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b" Namespace="calico-system" Pod="csi-node-driver-fdgfj" WorkloadEndpoint="172.31.31.165-k8s-csi--node--driver--fdgfj-eth0"
Feb 13 19:50:13.833005 containerd[1885]: time="2025-02-13T19:50:13.832254379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:13.833005 containerd[1885]: time="2025-02-13T19:50:13.832326787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:13.833005 containerd[1885]: time="2025-02-13T19:50:13.832349197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:13.833005 containerd[1885]: time="2025-02-13T19:50:13.832446176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:13.868351 systemd-networkd[1737]: cali07b54814460: Link UP
Feb 13 19:50:13.868380 systemd[1]: Started cri-containerd-3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b.scope - libcontainer container 3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b.
Feb 13 19:50:13.869774 (udev-worker)[3552]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:13.871697 systemd-networkd[1737]: cali07b54814460: Gained carrier
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.565 [INFO][3507] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.594 [INFO][3507] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0 nginx-deployment-7fcdb87857- default 43673bfd-14eb-4ac2-bae2-c2c6f50ab414 1193 0 2025-02-13 19:50:06 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.31.165 nginx-deployment-7fcdb87857-5nrdn eth0 default [] [] [kns.default ksa.default.default] cali07b54814460 [] []}} ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Namespace="default" Pod="nginx-deployment-7fcdb87857-5nrdn" WorkloadEndpoint="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.594 [INFO][3507] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Namespace="default" Pod="nginx-deployment-7fcdb87857-5nrdn" WorkloadEndpoint="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.686 [INFO][3538] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" HandleID="k8s-pod-network.99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Workload="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.705 [INFO][3538] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" HandleID="k8s-pod-network.99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Workload="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051b50), Attrs:map[string]string{"namespace":"default", "node":"172.31.31.165", "pod":"nginx-deployment-7fcdb87857-5nrdn", "timestamp":"2025-02-13 19:50:13.686434777 +0000 UTC"}, Hostname:"172.31.31.165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.705 [INFO][3538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.755 [INFO][3538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.756 [INFO][3538] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.31.165'
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.808 [INFO][3538] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" host="172.31.31.165"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.817 [INFO][3538] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.31.165"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.827 [INFO][3538] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="172.31.31.165"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.831 [INFO][3538] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="172.31.31.165"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.835 [INFO][3538] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="172.31.31.165"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.835 [INFO][3538] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" host="172.31.31.165"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.842 [INFO][3538] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.848 [INFO][3538] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" host="172.31.31.165"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.860 [INFO][3538] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.66/26] block=192.168.67.64/26 handle="k8s-pod-network.99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" host="172.31.31.165"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.861 [INFO][3538] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.66/26] handle="k8s-pod-network.99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" host="172.31.31.165"
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.861 [INFO][3538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
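The IPAM records above both allocate out of the host's affine block 192.168.67.64/26 on 172.31.31.165: first 192.168.67.65 (csi-node-driver-fdgfj), then 192.168.67.66 (the nginx pod). The block arithmetic behind those assignments can be illustrated with Python's standard `ipaddress` module — this is only a sketch of the CIDR math, not Calico's actual allocator (which also manages handles, affinity, and the datastore):

```python
import ipaddress

# The affine block the IPAM plugin loads for host 172.31.31.165 above.
block = ipaddress.ip_network("192.168.67.64/26")

# A /26 spans 64 addresses, .64 (network) through .127 (broadcast).
print(block.num_addresses)        # 64

# hosts() excludes the network and broadcast addresses, so the first
# candidates line up with the assignments seen in the log.
candidates = list(block.hosts())
print(candidates[0])              # 192.168.67.65  (csi-node-driver-fdgfj)
print(candidates[1])              # 192.168.67.66  (nginx-deployment-7fcdb87857-5nrdn)
```

Calico's default IPv4 block size is also /26, which matches the `block=192.168.67.64/26` fields in the records above.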
Feb 13 19:50:13.890922 containerd[1885]: 2025-02-13 19:50:13.861 [INFO][3538] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.66/26] IPv6=[] ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" HandleID="k8s-pod-network.99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Workload="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:13.891911 containerd[1885]: 2025-02-13 19:50:13.863 [INFO][3507] cni-plugin/k8s.go 386: Populated endpoint ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Namespace="default" Pod="nginx-deployment-7fcdb87857-5nrdn" WorkloadEndpoint="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"43673bfd-14eb-4ac2-bae2-c2c6f50ab414", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.165", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-5nrdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali07b54814460", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:13.891911 containerd[1885]: 2025-02-13 19:50:13.864 [INFO][3507] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.66/32] ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Namespace="default" Pod="nginx-deployment-7fcdb87857-5nrdn" WorkloadEndpoint="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:13.891911 containerd[1885]: 2025-02-13 19:50:13.864 [INFO][3507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07b54814460 ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Namespace="default" Pod="nginx-deployment-7fcdb87857-5nrdn" WorkloadEndpoint="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:13.891911 containerd[1885]: 2025-02-13 19:50:13.872 [INFO][3507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Namespace="default" Pod="nginx-deployment-7fcdb87857-5nrdn" WorkloadEndpoint="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:13.891911 containerd[1885]: 2025-02-13 19:50:13.872 [INFO][3507] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Namespace="default" Pod="nginx-deployment-7fcdb87857-5nrdn" WorkloadEndpoint="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"43673bfd-14eb-4ac2-bae2-c2c6f50ab414", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.165", ContainerID:"99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795", Pod:"nginx-deployment-7fcdb87857-5nrdn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali07b54814460", MAC:"92:6e:8f:3a:f7:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:13.891911 containerd[1885]: 2025-02-13 19:50:13.884 [INFO][3507] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795" Namespace="default" Pod="nginx-deployment-7fcdb87857-5nrdn" WorkloadEndpoint="172.31.31.165-k8s-nginx--deployment--7fcdb87857--5nrdn-eth0"
Feb 13 19:50:13.916796 containerd[1885]: time="2025-02-13T19:50:13.916749848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fdgfj,Uid:43ba3476-37d5-44c8-9a35-8e22f8bd98f5,Namespace:calico-system,Attempt:11,} returns sandbox id \"3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b\""
Feb 13 19:50:13.919178 containerd[1885]: time="2025-02-13T19:50:13.918786610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 19:50:13.937492 kubelet[2339]: E0213 19:50:13.937449 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:13.942715 containerd[1885]: time="2025-02-13T19:50:13.940628613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:13.942715 containerd[1885]: time="2025-02-13T19:50:13.940712742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:13.942715 containerd[1885]: time="2025-02-13T19:50:13.940738779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:13.943370 containerd[1885]: time="2025-02-13T19:50:13.943129470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:14.001485 systemd[1]: Started cri-containerd-99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795.scope - libcontainer container 99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795.
Feb 13 19:50:14.076514 containerd[1885]: time="2025-02-13T19:50:14.076480508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-5nrdn,Uid:43673bfd-14eb-4ac2-bae2-c2c6f50ab414,Namespace:default,Attempt:6,} returns sandbox id \"99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795\""
Feb 13 19:50:14.281099 systemd[1]: run-containerd-runc-k8s.io-3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b-runc.VwQR9a.mount: Deactivated successfully.
Feb 13 19:50:14.556018 kernel: bpftool[3771]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:50:14.841811 systemd-networkd[1737]: vxlan.calico: Link UP Feb 13 19:50:14.841823 systemd-networkd[1737]: vxlan.calico: Gained carrier Feb 13 19:50:14.897267 systemd-networkd[1737]: cali9b2a82d273b: Gained IPv6LL Feb 13 19:50:14.938815 kubelet[2339]: E0213 19:50:14.938768 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:15.025813 systemd-networkd[1737]: cali07b54814460: Gained IPv6LL Feb 13 19:50:15.766769 containerd[1885]: time="2025-02-13T19:50:15.766578696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:15.768151 containerd[1885]: time="2025-02-13T19:50:15.768098426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:50:15.771758 containerd[1885]: time="2025-02-13T19:50:15.771424029Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:15.774796 containerd[1885]: time="2025-02-13T19:50:15.774751960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:15.777133 containerd[1885]: time="2025-02-13T19:50:15.775914027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.857086701s" Feb 13 19:50:15.777133 
containerd[1885]: time="2025-02-13T19:50:15.776006598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:50:15.778266 containerd[1885]: time="2025-02-13T19:50:15.778125093Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:50:15.779346 containerd[1885]: time="2025-02-13T19:50:15.779312892Z" level=info msg="CreateContainer within sandbox \"3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:50:15.810731 containerd[1885]: time="2025-02-13T19:50:15.810384236Z" level=info msg="CreateContainer within sandbox \"3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"17cbba87c3015a17331f5d3027c7e3a7e799cd77a56529e22af5df3bf368757b\"" Feb 13 19:50:15.817997 containerd[1885]: time="2025-02-13T19:50:15.817952923Z" level=info msg="StartContainer for \"17cbba87c3015a17331f5d3027c7e3a7e799cd77a56529e22af5df3bf368757b\"" Feb 13 19:50:15.901438 systemd[1]: Started cri-containerd-17cbba87c3015a17331f5d3027c7e3a7e799cd77a56529e22af5df3bf368757b.scope - libcontainer container 17cbba87c3015a17331f5d3027c7e3a7e799cd77a56529e22af5df3bf368757b. 
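The containerd entries above report each completed pull with its byte count and wall-clock duration (e.g. the calico/csi image "in 1.857086701s"). A minimal sketch for pulling those figures out of a captured journal; the regex and the escaped-quote rendering are assumptions based on how the messages appear in this excerpt, not a containerd API:

```python
import re

# Matches containerd's 'Pulled image \"…\" … size \"N\" in Ds' messages as they
# are rendered (with escaped quotes) in the journal excerpt above.
PULLED_RE = re.compile(
    r'Pulled image \\"(?P<ref>[^\\]+)\\".*size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)s'
)

def pull_stats(journal_text):
    """Yield (image_ref, size_bytes, seconds) for each completed image pull."""
    for m in PULLED_RE.finditer(journal_text):
        yield m.group("ref"), int(m.group("size")), float(m.group("dur"))
```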
Feb 13 19:50:15.921379 systemd-networkd[1737]: vxlan.calico: Gained IPv6LL Feb 13 19:50:15.940059 kubelet[2339]: E0213 19:50:15.940025 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:15.973013 containerd[1885]: time="2025-02-13T19:50:15.972962944Z" level=info msg="StartContainer for \"17cbba87c3015a17331f5d3027c7e3a7e799cd77a56529e22af5df3bf368757b\" returns successfully" Feb 13 19:50:16.942179 kubelet[2339]: E0213 19:50:16.941689 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:17.941943 kubelet[2339]: E0213 19:50:17.941884 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:18.743860 ntpd[1867]: Listen normally on 7 vxlan.calico 192.168.67.64:123 Feb 13 19:50:18.743957 ntpd[1867]: Listen normally on 8 cali9b2a82d273b [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:50:18.746443 ntpd[1867]: 13 Feb 19:50:18 ntpd[1867]: Listen normally on 7 vxlan.calico 192.168.67.64:123 Feb 13 19:50:18.746443 ntpd[1867]: 13 Feb 19:50:18 ntpd[1867]: Listen normally on 8 cali9b2a82d273b [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:50:18.746443 ntpd[1867]: 13 Feb 19:50:18 ntpd[1867]: Listen normally on 9 cali07b54814460 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:50:18.746443 ntpd[1867]: 13 Feb 19:50:18 ntpd[1867]: Listen normally on 10 vxlan.calico [fe80::6412:84ff:fe7f:dfdf%5]:123 Feb 13 19:50:18.744016 ntpd[1867]: Listen normally on 9 cali07b54814460 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:50:18.744061 ntpd[1867]: Listen normally on 10 vxlan.calico [fe80::6412:84ff:fe7f:dfdf%5]:123 Feb 13 19:50:18.942022 kubelet[2339]: E0213 19:50:18.941983 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:19.521332 update_engine[1874]: I20250213 19:50:19.520208 1874 
update_attempter.cc:509] Updating boot flags... Feb 13 19:50:19.667517 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3898) Feb 13 19:50:19.802348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2898239528.mount: Deactivated successfully. Feb 13 19:50:19.946325 kubelet[2339]: E0213 19:50:19.944263 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:20.105185 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3900) Feb 13 19:50:20.601549 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3900) Feb 13 19:50:20.948136 kubelet[2339]: E0213 19:50:20.947877 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:21.950192 kubelet[2339]: E0213 19:50:21.950075 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:22.395423 containerd[1885]: time="2025-02-13T19:50:22.395367538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:22.398065 containerd[1885]: time="2025-02-13T19:50:22.397995221Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 19:50:22.399945 containerd[1885]: time="2025-02-13T19:50:22.399430323Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:22.429936 containerd[1885]: time="2025-02-13T19:50:22.429865557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:22.431108 containerd[1885]: time="2025-02-13T19:50:22.430952309Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 6.652788389s" Feb 13 19:50:22.431108 containerd[1885]: time="2025-02-13T19:50:22.430997009Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:50:22.443567 containerd[1885]: time="2025-02-13T19:50:22.443419050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:50:22.477331 containerd[1885]: time="2025-02-13T19:50:22.477232721Z" level=info msg="CreateContainer within sandbox \"99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:50:22.503221 containerd[1885]: time="2025-02-13T19:50:22.503146452Z" level=info msg="CreateContainer within sandbox \"99f962eb4fbaf2aea9900f6800be86b7f76aca13bb100011c184d7903c217795\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"482c4749dbf4c1f5265fe32f8c87582e903c0499e17fea25fce6d1f259bb8ed9\"" Feb 13 19:50:22.503826 containerd[1885]: time="2025-02-13T19:50:22.503797620Z" level=info msg="StartContainer for \"482c4749dbf4c1f5265fe32f8c87582e903c0499e17fea25fce6d1f259bb8ed9\"" Feb 13 19:50:22.550698 systemd[1]: Started cri-containerd-482c4749dbf4c1f5265fe32f8c87582e903c0499e17fea25fce6d1f259bb8ed9.scope - libcontainer container 482c4749dbf4c1f5265fe32f8c87582e903c0499e17fea25fce6d1f259bb8ed9. 
Feb 13 19:50:22.602613 containerd[1885]: time="2025-02-13T19:50:22.602576794Z" level=info msg="StartContainer for \"482c4749dbf4c1f5265fe32f8c87582e903c0499e17fea25fce6d1f259bb8ed9\" returns successfully" Feb 13 19:50:22.951566 kubelet[2339]: E0213 19:50:22.950966 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:23.635642 kubelet[2339]: I0213 19:50:23.635571 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-5nrdn" podStartSLOduration=9.265842208 podStartE2EDuration="17.629784488s" podCreationTimestamp="2025-02-13 19:50:06 +0000 UTC" firstStartedPulling="2025-02-13 19:50:14.078346226 +0000 UTC m=+30.263575184" lastFinishedPulling="2025-02-13 19:50:22.442288498 +0000 UTC m=+38.627517464" observedRunningTime="2025-02-13 19:50:23.629553725 +0000 UTC m=+39.814782692" watchObservedRunningTime="2025-02-13 19:50:23.629784488 +0000 UTC m=+39.815013453" Feb 13 19:50:23.951619 kubelet[2339]: E0213 19:50:23.951457 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:24.484477 containerd[1885]: time="2025-02-13T19:50:24.484365234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:24.495971 containerd[1885]: time="2025-02-13T19:50:24.495842343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:50:24.504657 containerd[1885]: time="2025-02-13T19:50:24.499005748Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:24.514199 containerd[1885]: time="2025-02-13T19:50:24.511426380Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:24.514733 containerd[1885]: time="2025-02-13T19:50:24.514654533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.07117297s" Feb 13 19:50:24.515036 containerd[1885]: time="2025-02-13T19:50:24.514972930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:50:24.546094 containerd[1885]: time="2025-02-13T19:50:24.545907889Z" level=info msg="CreateContainer within sandbox \"3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:50:24.594726 containerd[1885]: time="2025-02-13T19:50:24.594673880Z" level=info msg="CreateContainer within sandbox \"3eaca9ab6d3a35ac66af255f85861d0c057e71238af5d682c33bfedff587219b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fd40e3d779f1f8526deb9e7058df8ad1649fbb86b314a6061e2607232f8d8721\"" Feb 13 19:50:24.596006 containerd[1885]: time="2025-02-13T19:50:24.595299793Z" level=info msg="StartContainer for \"fd40e3d779f1f8526deb9e7058df8ad1649fbb86b314a6061e2607232f8d8721\"" Feb 13 19:50:24.691444 systemd[1]: Started cri-containerd-fd40e3d779f1f8526deb9e7058df8ad1649fbb86b314a6061e2607232f8d8721.scope - libcontainer container fd40e3d779f1f8526deb9e7058df8ad1649fbb86b314a6061e2607232f8d8721. 
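The kubelet's `pod_startup_latency_tracker` entries in this log report `podStartE2EDuration` as `observedRunningTime` minus `podCreationTimestamp`. That arithmetic can be checked against the logged values with a short sketch; the timestamp format is taken from this journal, and since `datetime`'s `%f` tops out at microseconds, the nanosecond fraction is handled separately:

```python
from datetime import datetime, timezone
from decimal import Decimal

def e2e_seconds(created: str, observed: str) -> Decimal:
    """observedRunningTime - podCreationTimestamp, keeping the log's
    nanosecond precision (strptime's %f only parses microseconds)."""
    def to_secs(ts: str) -> Decimal:
        # ts like "2025-02-13 19:50:23.629784488 +0000 UTC"
        # (format assumed from the kubelet entries above; offset is always +0000 here)
        date, clock = ts.split(" ")[0:2]
        base = datetime.strptime(date + " " + clock.split(".")[0],
                                 "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
        frac = Decimal("0." + clock.split(".")[1]) if "." in clock else Decimal(0)
        return Decimal(int(base.timestamp())) + frac
    return to_secs(observed) - to_secs(created)
```

Feeding in the values from the nginx pod's entry (`podCreationTimestamp="2025-02-13 19:50:06 +0000 UTC"`, `observedRunningTime="2025-02-13 19:50:23.629784488 +0000 UTC"`) reproduces the logged `podStartE2EDuration="17.629784488s"`.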
Feb 13 19:50:24.753095 containerd[1885]: time="2025-02-13T19:50:24.752976276Z" level=info msg="StartContainer for \"fd40e3d779f1f8526deb9e7058df8ad1649fbb86b314a6061e2607232f8d8721\" returns successfully" Feb 13 19:50:24.917623 kubelet[2339]: E0213 19:50:24.917562 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:24.955921 kubelet[2339]: E0213 19:50:24.955844 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:25.077022 kubelet[2339]: I0213 19:50:25.076974 2339 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:50:25.079046 kubelet[2339]: I0213 19:50:25.078998 2339 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:50:25.956066 kubelet[2339]: E0213 19:50:25.956002 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:26.893006 kubelet[2339]: I0213 19:50:26.892942 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fdgfj" podStartSLOduration=31.282972708 podStartE2EDuration="41.892916471s" podCreationTimestamp="2025-02-13 19:49:45 +0000 UTC" firstStartedPulling="2025-02-13 19:50:13.918116396 +0000 UTC m=+30.103345351" lastFinishedPulling="2025-02-13 19:50:24.52805996 +0000 UTC m=+40.713289114" observedRunningTime="2025-02-13 19:50:25.67742725 +0000 UTC m=+41.862656226" watchObservedRunningTime="2025-02-13 19:50:26.892916471 +0000 UTC m=+43.078145446" Feb 13 19:50:26.906007 systemd[1]: Created slice kubepods-besteffort-pod6dcf390a_3e2d_4a5d_a3c5_283848caff27.slice - libcontainer container 
kubepods-besteffort-pod6dcf390a_3e2d_4a5d_a3c5_283848caff27.slice. Feb 13 19:50:26.929844 kubelet[2339]: I0213 19:50:26.929790 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6dcf390a-3e2d-4a5d-a3c5-283848caff27-data\") pod \"nfs-server-provisioner-0\" (UID: \"6dcf390a-3e2d-4a5d-a3c5-283848caff27\") " pod="default/nfs-server-provisioner-0" Feb 13 19:50:26.930008 kubelet[2339]: I0213 19:50:26.929911 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjb8t\" (UniqueName: \"kubernetes.io/projected/6dcf390a-3e2d-4a5d-a3c5-283848caff27-kube-api-access-hjb8t\") pod \"nfs-server-provisioner-0\" (UID: \"6dcf390a-3e2d-4a5d-a3c5-283848caff27\") " pod="default/nfs-server-provisioner-0" Feb 13 19:50:26.957225 kubelet[2339]: E0213 19:50:26.957176 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:27.221737 containerd[1885]: time="2025-02-13T19:50:27.221609915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6dcf390a-3e2d-4a5d-a3c5-283848caff27,Namespace:default,Attempt:0,}" Feb 13 19:50:27.442734 systemd-networkd[1737]: cali60e51b789ff: Link UP Feb 13 19:50:27.442949 systemd-networkd[1737]: cali60e51b789ff: Gained carrier Feb 13 19:50:27.446723 (udev-worker)[4313]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.312 [INFO][4296] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.31.165-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 6dcf390a-3e2d-4a5d-a3c5-283848caff27 1281 0 2025-02-13 19:50:26 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.31.165 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.165-k8s-nfs--server--provisioner--0-" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.313 [INFO][4296] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.165-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.349 [INFO][4307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" 
HandleID="k8s-pod-network.1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Workload="172.31.31.165-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.381 [INFO][4307] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" HandleID="k8s-pod-network.1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Workload="172.31.31.165-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002916d0), Attrs:map[string]string{"namespace":"default", "node":"172.31.31.165", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:50:27.349821581 +0000 UTC"}, Hostname:"172.31.31.165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.381 [INFO][4307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.381 [INFO][4307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
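The IPAM lines above show Calico working from the per-node block affinity 192.168.67.64/26 — the same block that earlier yielded 192.168.67.66 for the nginx pod and here goes on to yield 192.168.67.67 for the NFS provisioner. A quick sanity check with Python's stdlib `ipaddress` module, using only values that appear in the log:

```python
import ipaddress

# Block and pod addresses as reported in the journal above.
block = ipaddress.ip_network("192.168.67.64/26")
pod_ips = [ipaddress.ip_address(a) for a in ("192.168.67.66", "192.168.67.67")]

in_block = all(ip in block for ip in pod_ips)  # both pod IPs fall inside the block
block_size = block.num_addresses               # a /26 covers 64 addresses
```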
Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.381 [INFO][4307] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.31.165' Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.386 [INFO][4307] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" host="172.31.31.165" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.396 [INFO][4307] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.31.165" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.402 [INFO][4307] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="172.31.31.165" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.406 [INFO][4307] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="172.31.31.165" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.409 [INFO][4307] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="172.31.31.165" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.409 [INFO][4307] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" host="172.31.31.165" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.412 [INFO][4307] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7 Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.421 [INFO][4307] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" host="172.31.31.165" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.437 [INFO][4307] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.67/26] block=192.168.67.64/26 
handle="k8s-pod-network.1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" host="172.31.31.165" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.437 [INFO][4307] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.67/26] handle="k8s-pod-network.1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" host="172.31.31.165" Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.437 [INFO][4307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:27.481407 containerd[1885]: 2025-02-13 19:50:27.437 [INFO][4307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.67/26] IPv6=[] ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" HandleID="k8s-pod-network.1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Workload="172.31.31.165-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:50:27.482600 containerd[1885]: 2025-02-13 19:50:27.438 [INFO][4296] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.165-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.165-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6dcf390a-3e2d-4a5d-a3c5-283848caff27", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.165", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:27.482600 containerd[1885]: 2025-02-13 19:50:27.438 [INFO][4296] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.67/32] ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.165-k8s-nfs--server--provisioner--0-eth0"
Feb 13 19:50:27.482600 containerd[1885]: 2025-02-13 19:50:27.438 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.165-k8s-nfs--server--provisioner--0-eth0"
Feb 13 19:50:27.482600 containerd[1885]: 2025-02-13 19:50:27.441 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.165-k8s-nfs--server--provisioner--0-eth0"
Feb 13 19:50:27.483075 containerd[1885]: 2025-02-13 19:50:27.441 [INFO][4296] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.165-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.165-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6dcf390a-3e2d-4a5d-a3c5-283848caff27", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.165", ContainerID:"1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"06:62:0f:1f:00:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:50:27.483075 containerd[1885]: 2025-02-13 19:50:27.479 [INFO][4296] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.31.165-k8s-nfs--server--provisioner--0-eth0"
Feb 13 19:50:27.516528 containerd[1885]: time="2025-02-13T19:50:27.515883913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:50:27.516528 containerd[1885]: time="2025-02-13T19:50:27.515965896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:50:27.516528 containerd[1885]: time="2025-02-13T19:50:27.515987127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:27.516528 containerd[1885]: time="2025-02-13T19:50:27.516082115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:50:27.550009 systemd[1]: Started cri-containerd-1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7.scope - libcontainer container 1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7.
Feb 13 19:50:27.598248 containerd[1885]: time="2025-02-13T19:50:27.595652620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6dcf390a-3e2d-4a5d-a3c5-283848caff27,Namespace:default,Attempt:0,} returns sandbox id \"1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7\""
Feb 13 19:50:27.600999 containerd[1885]: time="2025-02-13T19:50:27.600962316Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 19:50:27.957954 kubelet[2339]: E0213 19:50:27.957895 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:28.849983 systemd-networkd[1737]: cali60e51b789ff: Gained IPv6LL
Feb 13 19:50:28.963529 kubelet[2339]: E0213 19:50:28.962290 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:29.962816 kubelet[2339]: E0213 19:50:29.962755 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:30.980519 kubelet[2339]: E0213 19:50:30.980434 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:31.059072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966584038.mount: Deactivated successfully.
Feb 13 19:50:31.743731 ntpd[1867]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123
Feb 13 19:50:31.744969 ntpd[1867]: 13 Feb 19:50:31 ntpd[1867]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123
Feb 13 19:50:31.989341 kubelet[2339]: E0213 19:50:31.989287 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:32.990312 kubelet[2339]: E0213 19:50:32.990237 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:33.990645 kubelet[2339]: E0213 19:50:33.990585 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:34.230095 containerd[1885]: time="2025-02-13T19:50:34.230038393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:34.232763 containerd[1885]: time="2025-02-13T19:50:34.232640627Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Feb 13 19:50:34.236381 containerd[1885]: time="2025-02-13T19:50:34.235205349Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:34.240786 containerd[1885]: time="2025-02-13T19:50:34.240630816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:50:34.242661 containerd[1885]: time="2025-02-13T19:50:34.242427970Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.641416186s"
Feb 13 19:50:34.242888 containerd[1885]: time="2025-02-13T19:50:34.242860228Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 19:50:34.297419 containerd[1885]: time="2025-02-13T19:50:34.297373791Z" level=info msg="CreateContainer within sandbox \"1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 19:50:34.327800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953638769.mount: Deactivated successfully.
Feb 13 19:50:34.358814 containerd[1885]: time="2025-02-13T19:50:34.358761865Z" level=info msg="CreateContainer within sandbox \"1c58403d9580df5deb00bd734a4ba40d7ccf431de7ed36d36474af910a3d8cf7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"380a25e6736964eb8a90edae5336b3538a14e9b0ac59e5cd58fbbc0b432246a3\""
Feb 13 19:50:34.362172 containerd[1885]: time="2025-02-13T19:50:34.359555840Z" level=info msg="StartContainer for \"380a25e6736964eb8a90edae5336b3538a14e9b0ac59e5cd58fbbc0b432246a3\""
Feb 13 19:50:34.415841 systemd[1]: run-containerd-runc-k8s.io-380a25e6736964eb8a90edae5336b3538a14e9b0ac59e5cd58fbbc0b432246a3-runc.PNDZNL.mount: Deactivated successfully.
Feb 13 19:50:34.423910 systemd[1]: Started cri-containerd-380a25e6736964eb8a90edae5336b3538a14e9b0ac59e5cd58fbbc0b432246a3.scope - libcontainer container 380a25e6736964eb8a90edae5336b3538a14e9b0ac59e5cd58fbbc0b432246a3.
Feb 13 19:50:34.500531 containerd[1885]: time="2025-02-13T19:50:34.500141333Z" level=info msg="StartContainer for \"380a25e6736964eb8a90edae5336b3538a14e9b0ac59e5cd58fbbc0b432246a3\" returns successfully"
Feb 13 19:50:34.884591 kubelet[2339]: I0213 19:50:34.884472 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.230876634 podStartE2EDuration="8.8810996s" podCreationTimestamp="2025-02-13 19:50:26 +0000 UTC" firstStartedPulling="2025-02-13 19:50:27.600550646 +0000 UTC m=+43.785779591" lastFinishedPulling="2025-02-13 19:50:34.250773604 +0000 UTC m=+50.436002557" observedRunningTime="2025-02-13 19:50:34.868604516 +0000 UTC m=+51.053833483" watchObservedRunningTime="2025-02-13 19:50:34.8810996 +0000 UTC m=+51.066328561"
Feb 13 19:50:34.992000 kubelet[2339]: E0213 19:50:34.991942 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:35.993026 kubelet[2339]: E0213 19:50:35.992974 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:36.993545 kubelet[2339]: E0213 19:50:36.993492 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:37.994595 kubelet[2339]: E0213 19:50:37.994532 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:38.995627 kubelet[2339]: E0213 19:50:38.995573 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:39.996181 kubelet[2339]: E0213 19:50:39.995903 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:40.996932 kubelet[2339]: E0213 19:50:40.996873 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:41.997214 kubelet[2339]: E0213 19:50:41.997149 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:42.997862 kubelet[2339]: E0213 19:50:42.997821 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:43.499439 systemd[1]: run-containerd-runc-k8s.io-655e8ffd9a32e1388f98fcfcbeddaa8e42b942962a4cc304f8b47d6da92657ff-runc.bo0hLN.mount: Deactivated successfully.
Feb 13 19:50:43.998301 kubelet[2339]: E0213 19:50:43.998256 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:44.908820 kubelet[2339]: E0213 19:50:44.908764 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:44.945916 containerd[1885]: time="2025-02-13T19:50:44.945874712Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\""
Feb 13 19:50:44.948202 containerd[1885]: time="2025-02-13T19:50:44.946014026Z" level=info msg="TearDown network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" successfully"
Feb 13 19:50:44.948202 containerd[1885]: time="2025-02-13T19:50:44.946033238Z" level=info msg="StopPodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" returns successfully"
Feb 13 19:50:44.980179 containerd[1885]: time="2025-02-13T19:50:44.980119700Z" level=info msg="RemovePodSandbox for \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\""
Feb 13 19:50:44.995795 containerd[1885]: time="2025-02-13T19:50:44.995738833Z" level=info msg="Forcibly stopping sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\""
Feb 13 19:50:44.995967 containerd[1885]: time="2025-02-13T19:50:44.995881221Z" level=info msg="TearDown network for sandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" successfully"
Feb 13 19:50:44.998499 kubelet[2339]: E0213 19:50:44.998442 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:50:45.016520 containerd[1885]: time="2025-02-13T19:50:45.016238166Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.016771 containerd[1885]: time="2025-02-13T19:50:45.016595015Z" level=info msg="RemovePodSandbox \"2b62366907a5c4e5cfdc6cd7b31f945525e392a0f589dc2f82e925d86e2ea2c1\" returns successfully"
Feb 13 19:50:45.019414 containerd[1885]: time="2025-02-13T19:50:45.019267976Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\""
Feb 13 19:50:45.021595 containerd[1885]: time="2025-02-13T19:50:45.021547986Z" level=info msg="TearDown network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" successfully"
Feb 13 19:50:45.021595 containerd[1885]: time="2025-02-13T19:50:45.021584272Z" level=info msg="StopPodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" returns successfully"
Feb 13 19:50:45.024796 containerd[1885]: time="2025-02-13T19:50:45.023740139Z" level=info msg="RemovePodSandbox for \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\""
Feb 13 19:50:45.024796 containerd[1885]: time="2025-02-13T19:50:45.023784754Z" level=info msg="Forcibly stopping sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\""
Feb 13 19:50:45.024796 containerd[1885]: time="2025-02-13T19:50:45.023861019Z" level=info msg="TearDown network for sandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" successfully"
Feb 13 19:50:45.029411 containerd[1885]: time="2025-02-13T19:50:45.029362038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.029557 containerd[1885]: time="2025-02-13T19:50:45.029432415Z" level=info msg="RemovePodSandbox \"efecd54d734502e34d0a69d8a5a114d63b5062f6117cf7eba47d753d483cb10b\" returns successfully"
Feb 13 19:50:45.031912 containerd[1885]: time="2025-02-13T19:50:45.031730887Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\""
Feb 13 19:50:45.032090 containerd[1885]: time="2025-02-13T19:50:45.032004977Z" level=info msg="TearDown network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" successfully"
Feb 13 19:50:45.032090 containerd[1885]: time="2025-02-13T19:50:45.032022103Z" level=info msg="StopPodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" returns successfully"
Feb 13 19:50:45.032820 containerd[1885]: time="2025-02-13T19:50:45.032780060Z" level=info msg="RemovePodSandbox for \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\""
Feb 13 19:50:45.032910 containerd[1885]: time="2025-02-13T19:50:45.032818144Z" level=info msg="Forcibly stopping sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\""
Feb 13 19:50:45.032968 containerd[1885]: time="2025-02-13T19:50:45.032911794Z" level=info msg="TearDown network for sandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" successfully"
Feb 13 19:50:45.040069 containerd[1885]: time="2025-02-13T19:50:45.040026660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.040282 containerd[1885]: time="2025-02-13T19:50:45.040082331Z" level=info msg="RemovePodSandbox \"6d0cf2870dee546e3520ef65dbb1bad38d0288cc2b953b5056fb6974d680b31f\" returns successfully"
Feb 13 19:50:45.041049 containerd[1885]: time="2025-02-13T19:50:45.040814996Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\""
Feb 13 19:50:45.041049 containerd[1885]: time="2025-02-13T19:50:45.041039804Z" level=info msg="TearDown network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" successfully"
Feb 13 19:50:45.041261 containerd[1885]: time="2025-02-13T19:50:45.041059398Z" level=info msg="StopPodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" returns successfully"
Feb 13 19:50:45.041585 containerd[1885]: time="2025-02-13T19:50:45.041560468Z" level=info msg="RemovePodSandbox for \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\""
Feb 13 19:50:45.041671 containerd[1885]: time="2025-02-13T19:50:45.041587615Z" level=info msg="Forcibly stopping sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\""
Feb 13 19:50:45.041718 containerd[1885]: time="2025-02-13T19:50:45.041673764Z" level=info msg="TearDown network for sandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" successfully"
Feb 13 19:50:45.047722 containerd[1885]: time="2025-02-13T19:50:45.047674387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.048044 containerd[1885]: time="2025-02-13T19:50:45.047727175Z" level=info msg="RemovePodSandbox \"ee64be68ddf1b878f675186b18d29f6845866691c336b6f2a375a3c90bd17520\" returns successfully"
Feb 13 19:50:45.048044 containerd[1885]: time="2025-02-13T19:50:45.047988038Z" level=info msg="StopPodSandbox for \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\""
Feb 13 19:50:45.048281 containerd[1885]: time="2025-02-13T19:50:45.048087622Z" level=info msg="TearDown network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" successfully"
Feb 13 19:50:45.048281 containerd[1885]: time="2025-02-13T19:50:45.048101765Z" level=info msg="StopPodSandbox for \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" returns successfully"
Feb 13 19:50:45.048682 containerd[1885]: time="2025-02-13T19:50:45.048654742Z" level=info msg="RemovePodSandbox for \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\""
Feb 13 19:50:45.048821 containerd[1885]: time="2025-02-13T19:50:45.048685017Z" level=info msg="Forcibly stopping sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\""
Feb 13 19:50:45.048894 containerd[1885]: time="2025-02-13T19:50:45.048830265Z" level=info msg="TearDown network for sandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" successfully"
Feb 13 19:50:45.053338 containerd[1885]: time="2025-02-13T19:50:45.053296347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.053475 containerd[1885]: time="2025-02-13T19:50:45.053350960Z" level=info msg="RemovePodSandbox \"bdf7339076a61ca7c53a93664690d8fb4943179b338aec63466a39de872d4bac\" returns successfully"
Feb 13 19:50:45.054104 containerd[1885]: time="2025-02-13T19:50:45.053885997Z" level=info msg="StopPodSandbox for \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\""
Feb 13 19:50:45.054104 containerd[1885]: time="2025-02-13T19:50:45.053987231Z" level=info msg="TearDown network for sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\" successfully"
Feb 13 19:50:45.054104 containerd[1885]: time="2025-02-13T19:50:45.054033056Z" level=info msg="StopPodSandbox for \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\" returns successfully"
Feb 13 19:50:45.054538 containerd[1885]: time="2025-02-13T19:50:45.054508323Z" level=info msg="RemovePodSandbox for \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\""
Feb 13 19:50:45.054653 containerd[1885]: time="2025-02-13T19:50:45.054543275Z" level=info msg="Forcibly stopping sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\""
Feb 13 19:50:45.054700 containerd[1885]: time="2025-02-13T19:50:45.054632723Z" level=info msg="TearDown network for sandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\" successfully"
Feb 13 19:50:45.060326 containerd[1885]: time="2025-02-13T19:50:45.060268311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.060577 containerd[1885]: time="2025-02-13T19:50:45.060332542Z" level=info msg="RemovePodSandbox \"fbecb406311d8c545579625e68a82f20eb994799a3f5f7fe2546c8ac5dd772ab\" returns successfully"
Feb 13 19:50:45.060817 containerd[1885]: time="2025-02-13T19:50:45.060787818Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:45.060922 containerd[1885]: time="2025-02-13T19:50:45.060906002Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:45.061018 containerd[1885]: time="2025-02-13T19:50:45.060922858Z" level=info msg="StopPodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:45.061546 containerd[1885]: time="2025-02-13T19:50:45.061515710Z" level=info msg="RemovePodSandbox for \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:45.061717 containerd[1885]: time="2025-02-13T19:50:45.061546928Z" level=info msg="Forcibly stopping sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\""
Feb 13 19:50:45.061768 containerd[1885]: time="2025-02-13T19:50:45.061716542Z" level=info msg="TearDown network for sandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" successfully"
Feb 13 19:50:45.077405 containerd[1885]: time="2025-02-13T19:50:45.077353803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.077584 containerd[1885]: time="2025-02-13T19:50:45.077413975Z" level=info msg="RemovePodSandbox \"3900b26f05f6b582e9e36a2bcf24e111c0c42c0196f7b684ddf1317025c90a67\" returns successfully"
Feb 13 19:50:45.078136 containerd[1885]: time="2025-02-13T19:50:45.077955449Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:45.078136 containerd[1885]: time="2025-02-13T19:50:45.078057067Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully"
Feb 13 19:50:45.078136 containerd[1885]: time="2025-02-13T19:50:45.078071870Z" level=info msg="StopPodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully"
Feb 13 19:50:45.078575 containerd[1885]: time="2025-02-13T19:50:45.078522597Z" level=info msg="RemovePodSandbox for \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:45.078575 containerd[1885]: time="2025-02-13T19:50:45.078558177Z" level=info msg="Forcibly stopping sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\""
Feb 13 19:50:45.078912 containerd[1885]: time="2025-02-13T19:50:45.078642128Z" level=info msg="TearDown network for sandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" successfully"
Feb 13 19:50:45.085499 containerd[1885]: time="2025-02-13T19:50:45.085452020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.085694 containerd[1885]: time="2025-02-13T19:50:45.085514442Z" level=info msg="RemovePodSandbox \"8c5df6da342cc029e669b2e1725644628818754d4820a4521fd1deae8cf78f25\" returns successfully"
Feb 13 19:50:45.086193 containerd[1885]: time="2025-02-13T19:50:45.085980988Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\""
Feb 13 19:50:45.086193 containerd[1885]: time="2025-02-13T19:50:45.086087983Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully"
Feb 13 19:50:45.086193 containerd[1885]: time="2025-02-13T19:50:45.086099281Z" level=info msg="StopPodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully"
Feb 13 19:50:45.086605 containerd[1885]: time="2025-02-13T19:50:45.086422601Z" level=info msg="RemovePodSandbox for \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\""
Feb 13 19:50:45.086677 containerd[1885]: time="2025-02-13T19:50:45.086606980Z" level=info msg="Forcibly stopping sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\""
Feb 13 19:50:45.086745 containerd[1885]: time="2025-02-13T19:50:45.086688950Z" level=info msg="TearDown network for sandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" successfully"
Feb 13 19:50:45.090528 containerd[1885]: time="2025-02-13T19:50:45.090481441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.090658 containerd[1885]: time="2025-02-13T19:50:45.090532693Z" level=info msg="RemovePodSandbox \"e59aaf0417539260f4c96c5a93c31b736ff1bf70fe4d5217fd5801b4041c635a\" returns successfully"
Feb 13 19:50:45.091246 containerd[1885]: time="2025-02-13T19:50:45.090953726Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\""
Feb 13 19:50:45.091246 containerd[1885]: time="2025-02-13T19:50:45.091128723Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully"
Feb 13 19:50:45.091246 containerd[1885]: time="2025-02-13T19:50:45.091141354Z" level=info msg="StopPodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully"
Feb 13 19:50:45.091746 containerd[1885]: time="2025-02-13T19:50:45.091630529Z" level=info msg="RemovePodSandbox for \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\""
Feb 13 19:50:45.091746 containerd[1885]: time="2025-02-13T19:50:45.091656803Z" level=info msg="Forcibly stopping sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\""
Feb 13 19:50:45.091746 containerd[1885]: time="2025-02-13T19:50:45.091733758Z" level=info msg="TearDown network for sandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" successfully"
Feb 13 19:50:45.095626 containerd[1885]: time="2025-02-13T19:50:45.095583048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.095882 containerd[1885]: time="2025-02-13T19:50:45.095636718Z" level=info msg="RemovePodSandbox \"e2ad112d4a7671173d5228e9290494f6240cb66df774a603401a8417f889f378\" returns successfully"
Feb 13 19:50:45.096357 containerd[1885]: time="2025-02-13T19:50:45.096331706Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\""
Feb 13 19:50:45.096449 containerd[1885]: time="2025-02-13T19:50:45.096431393Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully"
Feb 13 19:50:45.096496 containerd[1885]: time="2025-02-13T19:50:45.096449587Z" level=info msg="StopPodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully"
Feb 13 19:50:45.096824 containerd[1885]: time="2025-02-13T19:50:45.096784839Z" level=info msg="RemovePodSandbox for \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\""
Feb 13 19:50:45.096941 containerd[1885]: time="2025-02-13T19:50:45.096917932Z" level=info msg="Forcibly stopping sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\""
Feb 13 19:50:45.097045 containerd[1885]: time="2025-02-13T19:50:45.097001275Z" level=info msg="TearDown network for sandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" successfully"
Feb 13 19:50:45.101243 containerd[1885]: time="2025-02-13T19:50:45.101205932Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.101360 containerd[1885]: time="2025-02-13T19:50:45.101253746Z" level=info msg="RemovePodSandbox \"d16ab7f92ece653ad368d90448f6ff8e5aa4a1619cb8bd6812079227fd669573\" returns successfully"
Feb 13 19:50:45.101737 containerd[1885]: time="2025-02-13T19:50:45.101712670Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\""
Feb 13 19:50:45.101833 containerd[1885]: time="2025-02-13T19:50:45.101813345Z" level=info msg="TearDown network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" successfully"
Feb 13 19:50:45.101833 containerd[1885]: time="2025-02-13T19:50:45.101828596Z" level=info msg="StopPodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" returns successfully"
Feb 13 19:50:45.102110 containerd[1885]: time="2025-02-13T19:50:45.102083853Z" level=info msg="RemovePodSandbox for \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\""
Feb 13 19:50:45.102110 containerd[1885]: time="2025-02-13T19:50:45.102109726Z" level=info msg="Forcibly stopping sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\""
Feb 13 19:50:45.102249 containerd[1885]: time="2025-02-13T19:50:45.102198099Z" level=info msg="TearDown network for sandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" successfully"
Feb 13 19:50:45.106275 containerd[1885]: time="2025-02-13T19:50:45.106234966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.106492 containerd[1885]: time="2025-02-13T19:50:45.106393789Z" level=info msg="RemovePodSandbox \"5f90689133db284f03e57346a91568f6dca86f7b44688ebff7b2f8f119296e12\" returns successfully"
Feb 13 19:50:45.106906 containerd[1885]: time="2025-02-13T19:50:45.106877733Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\""
Feb 13 19:50:45.107006 containerd[1885]: time="2025-02-13T19:50:45.106984849Z" level=info msg="TearDown network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" successfully"
Feb 13 19:50:45.107054 containerd[1885]: time="2025-02-13T19:50:45.107003553Z" level=info msg="StopPodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" returns successfully"
Feb 13 19:50:45.107392 containerd[1885]: time="2025-02-13T19:50:45.107368486Z" level=info msg="RemovePodSandbox for \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\""
Feb 13 19:50:45.107505 containerd[1885]: time="2025-02-13T19:50:45.107392888Z" level=info msg="Forcibly stopping sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\""
Feb 13 19:50:45.107505 containerd[1885]: time="2025-02-13T19:50:45.107464513Z" level=info msg="TearDown network for sandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" successfully"
Feb 13 19:50:45.112982 containerd[1885]: time="2025-02-13T19:50:45.112939997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:50:45.113369 containerd[1885]: time="2025-02-13T19:50:45.113219517Z" level=info msg="RemovePodSandbox \"88354e409f6edf22799361d25f996286bfd1239d0972f5e022c721dcecaebfe9\" returns successfully" Feb 13 19:50:45.114517 containerd[1885]: time="2025-02-13T19:50:45.114492179Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\"" Feb 13 19:50:45.115402 containerd[1885]: time="2025-02-13T19:50:45.115142928Z" level=info msg="TearDown network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" successfully" Feb 13 19:50:45.115402 containerd[1885]: time="2025-02-13T19:50:45.115188105Z" level=info msg="StopPodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" returns successfully" Feb 13 19:50:45.115685 containerd[1885]: time="2025-02-13T19:50:45.115663402Z" level=info msg="RemovePodSandbox for \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\"" Feb 13 19:50:45.115753 containerd[1885]: time="2025-02-13T19:50:45.115724471Z" level=info msg="Forcibly stopping sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\"" Feb 13 19:50:45.115858 containerd[1885]: time="2025-02-13T19:50:45.115807360Z" level=info msg="TearDown network for sandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" successfully" Feb 13 19:50:45.120033 containerd[1885]: time="2025-02-13T19:50:45.119989375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:50:45.120137 containerd[1885]: time="2025-02-13T19:50:45.120042198Z" level=info msg="RemovePodSandbox \"19cfad14f002028fdbf711b7def1c5dc2eaa589ceeb930d73d575d0146db29dd\" returns successfully" Feb 13 19:50:45.120573 containerd[1885]: time="2025-02-13T19:50:45.120549372Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\"" Feb 13 19:50:45.120768 containerd[1885]: time="2025-02-13T19:50:45.120742765Z" level=info msg="TearDown network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" successfully" Feb 13 19:50:45.120768 containerd[1885]: time="2025-02-13T19:50:45.120762075Z" level=info msg="StopPodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" returns successfully" Feb 13 19:50:45.121207 containerd[1885]: time="2025-02-13T19:50:45.121180924Z" level=info msg="RemovePodSandbox for \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\"" Feb 13 19:50:45.121286 containerd[1885]: time="2025-02-13T19:50:45.121210118Z" level=info msg="Forcibly stopping sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\"" Feb 13 19:50:45.121331 containerd[1885]: time="2025-02-13T19:50:45.121286673Z" level=info msg="TearDown network for sandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" successfully" Feb 13 19:50:45.136485 containerd[1885]: time="2025-02-13T19:50:45.134754193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:50:45.136485 containerd[1885]: time="2025-02-13T19:50:45.134830914Z" level=info msg="RemovePodSandbox \"a20aae98eb8a101990a4bd0fe05e309548cd647bed1ab097d1f39b26b177432f\" returns successfully" Feb 13 19:50:45.151030 containerd[1885]: time="2025-02-13T19:50:45.150981389Z" level=info msg="StopPodSandbox for \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\"" Feb 13 19:50:45.154605 containerd[1885]: time="2025-02-13T19:50:45.154349090Z" level=info msg="TearDown network for sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" successfully" Feb 13 19:50:45.154605 containerd[1885]: time="2025-02-13T19:50:45.154384528Z" level=info msg="StopPodSandbox for \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" returns successfully" Feb 13 19:50:45.157281 containerd[1885]: time="2025-02-13T19:50:45.155859564Z" level=info msg="RemovePodSandbox for \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\"" Feb 13 19:50:45.157281 containerd[1885]: time="2025-02-13T19:50:45.155903503Z" level=info msg="Forcibly stopping sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\"" Feb 13 19:50:45.157281 containerd[1885]: time="2025-02-13T19:50:45.155997051Z" level=info msg="TearDown network for sandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" successfully" Feb 13 19:50:45.172691 containerd[1885]: time="2025-02-13T19:50:45.172569281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:50:45.172691 containerd[1885]: time="2025-02-13T19:50:45.172642144Z" level=info msg="RemovePodSandbox \"6c1991b91c4916b0397cc52e8b0f7a5a9ad584d943401a497b33569268aa2483\" returns successfully" Feb 13 19:50:45.173883 containerd[1885]: time="2025-02-13T19:50:45.173809436Z" level=info msg="StopPodSandbox for \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\"" Feb 13 19:50:45.203267 containerd[1885]: time="2025-02-13T19:50:45.203202816Z" level=info msg="TearDown network for sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\" successfully" Feb 13 19:50:45.203610 containerd[1885]: time="2025-02-13T19:50:45.203351325Z" level=info msg="StopPodSandbox for \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\" returns successfully" Feb 13 19:50:45.205297 containerd[1885]: time="2025-02-13T19:50:45.204587064Z" level=info msg="RemovePodSandbox for \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\"" Feb 13 19:50:45.205297 containerd[1885]: time="2025-02-13T19:50:45.204632218Z" level=info msg="Forcibly stopping sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\"" Feb 13 19:50:45.205297 containerd[1885]: time="2025-02-13T19:50:45.204736309Z" level=info msg="TearDown network for sandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\" successfully" Feb 13 19:50:45.212217 containerd[1885]: time="2025-02-13T19:50:45.212155275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:50:45.212584 containerd[1885]: time="2025-02-13T19:50:45.212232021Z" level=info msg="RemovePodSandbox \"0d9961da466332acb1a2df958b2895b753324106d680af5ca66ec1f228eb2efe\" returns successfully" Feb 13 19:50:45.998774 kubelet[2339]: E0213 19:50:45.998732 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:46.999674 kubelet[2339]: E0213 19:50:46.999519 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:47.999838 kubelet[2339]: E0213 19:50:47.999766 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:49.000290 kubelet[2339]: E0213 19:50:49.000233 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:50.001305 kubelet[2339]: E0213 19:50:50.001249 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:51.002258 kubelet[2339]: E0213 19:50:51.002205 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:52.002620 kubelet[2339]: E0213 19:50:52.002563 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:53.003029 kubelet[2339]: E0213 19:50:53.002975 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:54.003651 kubelet[2339]: E0213 19:50:54.003590 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:55.004227 kubelet[2339]: E0213 19:50:55.004182 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:50:56.004512 kubelet[2339]: E0213 19:50:56.004452 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:57.005380 kubelet[2339]: E0213 19:50:57.005324 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:58.005724 kubelet[2339]: E0213 19:50:58.005665 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:58.969559 systemd[1]: Created slice kubepods-besteffort-podedd0e1d6_cbd0_4c9b_80a6_1ba07b3d5803.slice - libcontainer container kubepods-besteffort-podedd0e1d6_cbd0_4c9b_80a6_1ba07b3d5803.slice. Feb 13 19:50:59.006143 kubelet[2339]: E0213 19:50:59.006104 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:50:59.086361 kubelet[2339]: I0213 19:50:59.086308 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb0154aa-edd3-4eb1-b1f7-1d1854eab6cc\" (UniqueName: \"kubernetes.io/nfs/edd0e1d6-cbd0-4c9b-80a6-1ba07b3d5803-pvc-eb0154aa-edd3-4eb1-b1f7-1d1854eab6cc\") pod \"test-pod-1\" (UID: \"edd0e1d6-cbd0-4c9b-80a6-1ba07b3d5803\") " pod="default/test-pod-1" Feb 13 19:50:59.086361 kubelet[2339]: I0213 19:50:59.086358 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n68c\" (UniqueName: \"kubernetes.io/projected/edd0e1d6-cbd0-4c9b-80a6-1ba07b3d5803-kube-api-access-5n68c\") pod \"test-pod-1\" (UID: \"edd0e1d6-cbd0-4c9b-80a6-1ba07b3d5803\") " pod="default/test-pod-1" Feb 13 19:50:59.368198 kernel: FS-Cache: Loaded Feb 13 19:50:59.447883 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:50:59.447973 kernel: RPC: Registered udp transport module. 
Feb 13 19:50:59.447996 kernel: RPC: Registered tcp transport module. Feb 13 19:50:59.448044 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:50:59.448203 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 19:50:59.912429 kernel: NFS: Registering the id_resolver key type Feb 13 19:50:59.912574 kernel: Key type id_resolver registered Feb 13 19:50:59.912603 kernel: Key type id_legacy registered Feb 13 19:51:00.008180 kubelet[2339]: E0213 19:51:00.008097 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:00.044954 nfsidmap[4530]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 19:51:00.052686 nfsidmap[4531]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 19:51:00.182044 containerd[1885]: time="2025-02-13T19:51:00.181918407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:edd0e1d6-cbd0-4c9b-80a6-1ba07b3d5803,Namespace:default,Attempt:0,}" Feb 13 19:51:00.437570 (udev-worker)[4518]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:51:00.444990 systemd-networkd[1737]: cali5ec59c6bf6e: Link UP Feb 13 19:51:00.453771 systemd-networkd[1737]: cali5ec59c6bf6e: Gained carrier Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.282 [INFO][4532] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.31.165-k8s-test--pod--1-eth0 default edd0e1d6-cbd0-4c9b-80a6-1ba07b3d5803 1388 0 2025-02-13 19:50:28 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.31.165 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.165-k8s-test--pod--1-" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.282 [INFO][4532] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.165-k8s-test--pod--1-eth0" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.334 [INFO][4544] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" HandleID="k8s-pod-network.301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Workload="172.31.31.165-k8s-test--pod--1-eth0" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.348 [INFO][4544] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" HandleID="k8s-pod-network.301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Workload="172.31.31.165-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307100), Attrs:map[string]string{"namespace":"default", 
"node":"172.31.31.165", "pod":"test-pod-1", "timestamp":"2025-02-13 19:51:00.33489511 +0000 UTC"}, Hostname:"172.31.31.165", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.349 [INFO][4544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.349 [INFO][4544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.349 [INFO][4544] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.31.165' Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.352 [INFO][4544] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" host="172.31.31.165" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.365 [INFO][4544] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.31.165" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.381 [INFO][4544] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="172.31.31.165" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.386 [INFO][4544] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="172.31.31.165" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.391 [INFO][4544] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="172.31.31.165" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.391 [INFO][4544] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" host="172.31.31.165" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 
19:51:00.394 [INFO][4544] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882 Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.403 [INFO][4544] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" host="172.31.31.165" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.416 [INFO][4544] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.68/26] block=192.168.67.64/26 handle="k8s-pod-network.301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" host="172.31.31.165" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.416 [INFO][4544] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.68/26] handle="k8s-pod-network.301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" host="172.31.31.165" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.417 [INFO][4544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.417 [INFO][4544] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.68/26] IPv6=[] ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" HandleID="k8s-pod-network.301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Workload="172.31.31.165-k8s-test--pod--1-eth0" Feb 13 19:51:00.484215 containerd[1885]: 2025-02-13 19:51:00.424 [INFO][4532] cni-plugin/k8s.go 386: Populated endpoint ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.165-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.165-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"edd0e1d6-cbd0-4c9b-80a6-1ba07b3d5803", ResourceVersion:"1388", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.31.165", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:00.485707 containerd[1885]: 2025-02-13 19:51:00.425 [INFO][4532] cni-plugin/k8s.go 387: Calico CNI using 
IPs: [192.168.67.68/32] ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.165-k8s-test--pod--1-eth0" Feb 13 19:51:00.485707 containerd[1885]: 2025-02-13 19:51:00.425 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.165-k8s-test--pod--1-eth0" Feb 13 19:51:00.485707 containerd[1885]: 2025-02-13 19:51:00.446 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.165-k8s-test--pod--1-eth0" Feb 13 19:51:00.485707 containerd[1885]: 2025-02-13 19:51:00.458 [INFO][4532] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.165-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.31.165-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"edd0e1d6-cbd0-4c9b-80a6-1ba07b3d5803", ResourceVersion:"1388", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 50, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"172.31.31.165", ContainerID:"301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.67.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"2a:8e:be:21:b6:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:00.485707 containerd[1885]: 2025-02-13 19:51:00.477 [INFO][4532] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.31.165-k8s-test--pod--1-eth0" Feb 13 19:51:00.525293 containerd[1885]: time="2025-02-13T19:51:00.524851578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:00.525293 containerd[1885]: time="2025-02-13T19:51:00.524939083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:00.525293 containerd[1885]: time="2025-02-13T19:51:00.524963007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:00.525293 containerd[1885]: time="2025-02-13T19:51:00.525144002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:00.580121 systemd[1]: Started cri-containerd-301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882.scope - libcontainer container 301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882. 
Feb 13 19:51:00.646565 containerd[1885]: time="2025-02-13T19:51:00.646398541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:edd0e1d6-cbd0-4c9b-80a6-1ba07b3d5803,Namespace:default,Attempt:0,} returns sandbox id \"301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882\"" Feb 13 19:51:00.648218 containerd[1885]: time="2025-02-13T19:51:00.647854372Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:51:00.985029 containerd[1885]: time="2025-02-13T19:51:00.984977297Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:00.986427 containerd[1885]: time="2025-02-13T19:51:00.986367297Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:51:00.990904 containerd[1885]: time="2025-02-13T19:51:00.990851671Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 342.956857ms" Feb 13 19:51:00.990904 containerd[1885]: time="2025-02-13T19:51:00.990903822Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:51:00.993194 containerd[1885]: time="2025-02-13T19:51:00.993125160Z" level=info msg="CreateContainer within sandbox \"301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:51:01.008979 kubelet[2339]: E0213 19:51:01.008942 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:01.009649 containerd[1885]: 
time="2025-02-13T19:51:01.009609962Z" level=info msg="CreateContainer within sandbox \"301ad0da895235edcfa66b98ef10c897b1587684fd4d3e0282071b25b0784882\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"db8ea82c6a8b933a4957da235fcfaba59e2db0ae734d8df04a9d42852d1fee23\"" Feb 13 19:51:01.010337 containerd[1885]: time="2025-02-13T19:51:01.010208791Z" level=info msg="StartContainer for \"db8ea82c6a8b933a4957da235fcfaba59e2db0ae734d8df04a9d42852d1fee23\"" Feb 13 19:51:01.061810 systemd[1]: Started cri-containerd-db8ea82c6a8b933a4957da235fcfaba59e2db0ae734d8df04a9d42852d1fee23.scope - libcontainer container db8ea82c6a8b933a4957da235fcfaba59e2db0ae734d8df04a9d42852d1fee23. Feb 13 19:51:01.146414 containerd[1885]: time="2025-02-13T19:51:01.146132993Z" level=info msg="StartContainer for \"db8ea82c6a8b933a4957da235fcfaba59e2db0ae734d8df04a9d42852d1fee23\" returns successfully" Feb 13 19:51:01.979513 kubelet[2339]: I0213 19:51:01.979446 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=33.635179109 podStartE2EDuration="33.979425832s" podCreationTimestamp="2025-02-13 19:50:28 +0000 UTC" firstStartedPulling="2025-02-13 19:51:00.647559062 +0000 UTC m=+76.832788007" lastFinishedPulling="2025-02-13 19:51:00.991805784 +0000 UTC m=+77.177034730" observedRunningTime="2025-02-13 19:51:01.979064915 +0000 UTC m=+78.164293882" watchObservedRunningTime="2025-02-13 19:51:01.979425832 +0000 UTC m=+78.164654798" Feb 13 19:51:02.009449 kubelet[2339]: E0213 19:51:02.009384 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:02.191515 systemd-networkd[1737]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:51:03.010209 kubelet[2339]: E0213 19:51:03.010145 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:04.011139 kubelet[2339]: E0213 19:51:04.011078 
2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:04.743699 ntpd[1867]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 19:51:04.744393 ntpd[1867]: 13 Feb 19:51:04 ntpd[1867]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 19:51:04.908421 kubelet[2339]: E0213 19:51:04.908363 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:05.011613 kubelet[2339]: E0213 19:51:05.011483 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:06.012044 kubelet[2339]: E0213 19:51:06.011996 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:07.012240 kubelet[2339]: E0213 19:51:07.012192 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:08.012709 kubelet[2339]: E0213 19:51:08.012647 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:09.013036 kubelet[2339]: E0213 19:51:09.012978 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:10.013186 kubelet[2339]: E0213 19:51:10.013124 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:11.014022 kubelet[2339]: E0213 19:51:11.013965 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:12.015074 kubelet[2339]: E0213 19:51:12.015012 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
19:51:13.015754 kubelet[2339]: E0213 19:51:13.015699 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:13.445016 systemd[1]: run-containerd-runc-k8s.io-655e8ffd9a32e1388f98fcfcbeddaa8e42b942962a4cc304f8b47d6da92657ff-runc.CYiyHX.mount: Deactivated successfully. Feb 13 19:51:14.015950 kubelet[2339]: E0213 19:51:14.015838 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:15.016950 kubelet[2339]: E0213 19:51:15.016909 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:16.017858 kubelet[2339]: E0213 19:51:16.017815 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:17.018973 kubelet[2339]: E0213 19:51:17.018917 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:18.019717 kubelet[2339]: E0213 19:51:18.019661 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:19.019892 kubelet[2339]: E0213 19:51:19.019831 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:20.020709 kubelet[2339]: E0213 19:51:20.020644 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:21.021915 kubelet[2339]: E0213 19:51:21.021858 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:22.022621 kubelet[2339]: E0213 19:51:22.022576 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:23.023733 
kubelet[2339]: E0213 19:51:23.023671 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:24.024755 kubelet[2339]: E0213 19:51:24.024699 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:24.908722 kubelet[2339]: E0213 19:51:24.908666 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:25.025246 kubelet[2339]: E0213 19:51:25.025189 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:26.025806 kubelet[2339]: E0213 19:51:26.025747 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:26.822274 kubelet[2339]: E0213 19:51:26.822212 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.165?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:51:27.026752 kubelet[2339]: E0213 19:51:27.026398 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:28.027013 kubelet[2339]: E0213 19:51:28.026844 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:29.027538 kubelet[2339]: E0213 19:51:29.027481 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:30.028487 kubelet[2339]: E0213 19:51:30.028425 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:31.029178 kubelet[2339]: E0213 19:51:31.029103 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:32.029330 kubelet[2339]: E0213 19:51:32.029274 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:33.029491 kubelet[2339]: E0213 19:51:33.029420 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:34.029631 kubelet[2339]: E0213 19:51:34.029574 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:35.030325 kubelet[2339]: E0213 19:51:35.030261 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:36.030948 kubelet[2339]: E0213 19:51:36.030890 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:36.822969 kubelet[2339]: E0213 19:51:36.822894 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.165?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:51:37.031544 kubelet[2339]: E0213 19:51:37.031491 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:38.032729 kubelet[2339]: E0213 19:51:38.032668 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:39.033746 kubelet[2339]: E0213 19:51:39.033687 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:40.036244 kubelet[2339]: E0213 19:51:40.034413 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:40.556104 update_engine[1874]: I20250213 19:51:40.555898 1874 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 19:51:40.556104 update_engine[1874]: I20250213 19:51:40.555959 1874 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 19:51:40.556616 update_engine[1874]: I20250213 19:51:40.556243 1874 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 19:51:40.561507 update_engine[1874]: I20250213 19:51:40.561326 1874 omaha_request_params.cc:62] Current group set to stable
Feb 13 19:51:40.563996 update_engine[1874]: I20250213 19:51:40.563565 1874 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 19:51:40.563996 update_engine[1874]: I20250213 19:51:40.563602 1874 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 19:51:40.563996 update_engine[1874]: I20250213 19:51:40.563627 1874 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 19:51:40.563996 update_engine[1874]: I20250213 19:51:40.563676 1874 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 19:51:40.563996 update_engine[1874]: I20250213 19:51:40.563772 1874 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 19:51:40.563996 update_engine[1874]: I20250213 19:51:40.563782 1874 omaha_request_action.cc:272] Request:
Feb 13 19:51:40.563996 update_engine[1874]:
Feb 13 19:51:40.563996 update_engine[1874]:
Feb 13 19:51:40.563996 update_engine[1874]:
Feb 13 19:51:40.563996 update_engine[1874]:
Feb 13 19:51:40.563996 update_engine[1874]:
Feb 13 19:51:40.563996 update_engine[1874]:
Feb 13 19:51:40.563996 update_engine[1874]:
Feb 13 19:51:40.563996 update_engine[1874]:
Feb 13 19:51:40.563996 update_engine[1874]: I20250213 19:51:40.563791 1874 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:51:40.566675 locksmithd[1907]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 19:51:40.573479 update_engine[1874]: I20250213 19:51:40.573333 1874 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:51:40.574174 update_engine[1874]: I20250213 19:51:40.574121 1874 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:51:40.582836 update_engine[1874]: E20250213 19:51:40.582751 1874 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:51:40.582970 update_engine[1874]: I20250213 19:51:40.582875 1874 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 19:51:41.035000 kubelet[2339]: E0213 19:51:41.034864 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:42.036106 kubelet[2339]: E0213 19:51:42.036049 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:43.036450 kubelet[2339]: E0213 19:51:43.036407 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:43.442913 systemd[1]: run-containerd-runc-k8s.io-655e8ffd9a32e1388f98fcfcbeddaa8e42b942962a4cc304f8b47d6da92657ff-runc.mWJpQ2.mount: Deactivated successfully.
Feb 13 19:51:44.037442 kubelet[2339]: E0213 19:51:44.037356 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:44.908488 kubelet[2339]: E0213 19:51:44.908433 2339 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:45.038300 kubelet[2339]: E0213 19:51:45.038246 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:46.038607 kubelet[2339]: E0213 19:51:46.038535 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:46.823887 kubelet[2339]: E0213 19:51:46.823793 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.165?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:51:47.039473 kubelet[2339]: E0213 19:51:47.039408 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:48.039774 kubelet[2339]: E0213 19:51:48.039566 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:49.040475 kubelet[2339]: E0213 19:51:49.040412 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:50.041116 kubelet[2339]: E0213 19:51:50.040977 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:50.350021 kubelet[2339]: E0213 19:51:50.349381 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.165?timeout=10s\": unexpected EOF"
Feb 13 19:51:50.356547 kubelet[2339]: E0213 19:51:50.356014 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.218:6443/api/v1/namespaces/calico-system/events\": unexpected EOF" event=<
Feb 13 19:51:50.356547 kubelet[2339]: &Event{ObjectMeta:{calico-node-8s427.1823dc7b5eaa0abe calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-node-8s427,UID:8f980a18-9339-47a2-8d22-d71f11933c49,APIVersion:v1,ResourceVersion:984,FieldPath:spec.containers{calico-node},},Reason:Unhealthy,Message:Readiness probe failed: 2025-02-13 19:51:43.511 [INFO][319] node/health.go 202: Number of node(s) with BGP peering established = 0
Feb 13 19:51:50.356547 kubelet[2339]: calico/node is not ready: BIRD is not ready: BGP not established with 172.31.24.218
Feb 13 19:51:50.356547 kubelet[2339]: ,Source:EventSource{Component:kubelet,Host:172.31.31.165,},FirstTimestamp:2025-02-13 19:51:43.522433726 +0000 UTC m=+119.707662693,LastTimestamp:2025-02-13 19:51:43.522433726 +0000 UTC m=+119.707662693,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.165,}
Feb 13 19:51:50.356547 kubelet[2339]: >
Feb 13 19:51:50.357994 kubelet[2339]: E0213 19:51:50.357950 2339 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.165?timeout=10s\": read tcp 172.31.31.165:41420->172.31.24.218:6443: read: connection reset by peer"
Feb 13 19:51:50.357994 kubelet[2339]: I0213 19:51:50.357994 2339 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 13 19:51:50.359517 kubelet[2339]: E0213 19:51:50.359485 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.165?timeout=10s\": dial tcp 172.31.24.218:6443: connect: connection refused" interval="200ms"
Feb 13 19:51:50.519351 update_engine[1874]: I20250213 19:51:50.519253 1874 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:51:50.519836 update_engine[1874]: I20250213 19:51:50.519613 1874 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:51:50.519938 update_engine[1874]: I20250213 19:51:50.519902 1874 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:51:50.521418 update_engine[1874]: E20250213 19:51:50.521376 1874 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:51:50.521575 update_engine[1874]: I20250213 19:51:50.521457 1874 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 19:51:50.560876 kubelet[2339]: E0213 19:51:50.560818 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.165?timeout=10s\": dial tcp 172.31.24.218:6443: connect: connection refused" interval="400ms"
Feb 13 19:51:50.962176 kubelet[2339]: E0213 19:51:50.962114 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.165?timeout=10s\": dial tcp 172.31.24.218:6443: connect: connection refused" interval="800ms"
Feb 13 19:51:51.041661 kubelet[2339]: E0213 19:51:51.041600 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:52.042432 kubelet[2339]: E0213 19:51:52.042374 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:53.043584 kubelet[2339]: E0213 19:51:53.043523 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:54.044122 kubelet[2339]: E0213 19:51:54.044068 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:55.045079 kubelet[2339]: E0213 19:51:55.045017 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:56.045314 kubelet[2339]: E0213 19:51:56.045253 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:57.045872 kubelet[2339]: E0213 19:51:57.045819 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:58.046776 kubelet[2339]: E0213 19:51:58.046720 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:59.047434 kubelet[2339]: E0213 19:51:59.047376 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:00.047610 kubelet[2339]: E0213 19:52:00.047507 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:00.525480 update_engine[1874]: I20250213 19:52:00.524991 1874 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:52:00.525480 update_engine[1874]: I20250213 19:52:00.525422 1874 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:52:00.526409 update_engine[1874]: I20250213 19:52:00.526075 1874 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:52:00.526705 update_engine[1874]: E20250213 19:52:00.526662 1874 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:52:00.526927 update_engine[1874]: I20250213 19:52:00.526872 1874 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 19:52:01.049227 kubelet[2339]: E0213 19:52:01.049087 2339 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"